Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL
Zhong, Ruiqi, Snell, Charlie, Klein, Dan, Eisner, Jason
Can non-programmers annotate natural language utterances with complex programs that represent their meaning? We introduce APEL, a framework in which non-programmers select among candidate programs generated by a seed semantic parser (e.g., Codex). Since they cannot understand the candidate programs, we ask them to select indirectly by examining the programs' input-output examples. For each utterance, APEL actively searches for a simple input on which the candidate programs tend to produce different outputs. It then asks the non-programmers only to choose the appropriate output, thus allowing us to infer which program is correct; the inferred programs can then be used to fine-tune the parser. As a first case study, we recruited human non-programmers to use APEL to re-annotate SPIDER, a text-to-SQL dataset. Our approach achieved the same annotation accuracy as the original expert annotators (75%) and exposed many subtle errors in the original annotations.
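To make the selection loop concrete, below is a minimal Python sketch of the disambiguation idea described in the abstract, not the paper's actual implementation. The helper names (run_sql, pick_informative_db, filter_candidates), the fixed one-table schema, and the "most distinct outputs" heuristic for choosing an informative input are all assumptions introduced here for illustration; APEL's real criterion for picking databases is more sophisticated than this.

```python
# Hypothetical sketch of an APEL-style disambiguation round: run candidate
# SQL programs on small synthetic databases, pick a database on which the
# candidates disagree, show the annotator the distinct outputs, and keep
# only the candidates consistent with the output they choose.
import sqlite3


def run_sql(db_rows, query):
    # Execute `query` against a one-table in-memory database t(name, age);
    # a crashing candidate yields None instead of an output.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", db_rows)
    try:
        return tuple(conn.execute(query).fetchall())
    except sqlite3.Error:
        return None
    finally:
        conn.close()


def pick_informative_db(candidate_dbs, candidate_queries):
    # Choose the database on which the candidates produce the most distinct
    # outputs -- a naive stand-in for the paper's active search criterion.
    def n_distinct(db):
        return len({run_sql(db, q) for q in candidate_queries})
    return max(candidate_dbs, key=n_distinct)


def filter_candidates(db, candidate_queries, chosen_output):
    # Keep candidates whose output on `db` matches the annotator's choice.
    return [q for q in candidate_queries if run_sql(db, q) == chosen_output]


if __name__ == "__main__":
    candidates = [
        "SELECT name FROM t WHERE age > 30",
        "SELECT name FROM t WHERE age >= 30",
    ]
    dbs = [
        [("Ann", 30), ("Bob", 45)],  # distinguishes > from >=
        [("Ann", 20), ("Bob", 45)],  # does not: both candidates agree
    ]
    db = pick_informative_db(dbs, candidates)
    outputs = sorted({run_sql(db, q) for q in candidates}, key=repr)
    print("Ask annotator to choose among:", outputs)
    # Suppose the annotator says Ann (age exactly 30) should be included:
    survivors = filter_candidates(db, candidates, ((u"Ann"[:],), ("Bob",)))
    print("Surviving candidates:", survivors)
```

In the full framework, rounds like this would repeat until one candidate program remains (or the candidates are confidently ranked), and that program serves as the annotation for fine-tuning the seed parser; the sketch omits how the small candidate databases are synthesized from the original schema.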
arXiv.org Artificial Intelligence
Oct-23-2023