PEDANTS (Precise Evaluations of Diverse Answer Nominee Text for Skinflints): Efficient Evaluation Analysis and Benchmarking for Open-Domain Question Answering
Zongxia Li, Ishani Mondal, Yijun Liang, Huy Nghiem, Jordan Lee Boyd-Graber
Question answering (QA) can only make progress if we know whether an answer is correct, but for many of the most challenging and interesting QA examples, current efficient answer correctness (AC) metrics do not align with human judgments, particularly on the verbose, free-form answers produced by large language models (LLMs). There are two challenges: a lack of diverse evaluation data, and models that are too large and opaque; LLM-based scorers correlate better with human judgments, but this expensive approach has been tested on only a limited set of QA datasets. We rectify these issues by providing guidelines and datasets for evaluating machine QA, adapted from the human QA community. We also propose an efficient, low-resource, and interpretable QA evaluation method that is more stable than exact match and neural methods.
arXiv.org Artificial Intelligence
Jul-6-2024
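For context, the exact-match and token-F1 baselines that answer-correctness work like this typically compares against can be computed as in the sketch below. This is a minimal illustration of those standard SQuAD-style metrics (normalization, exact match, token-overlap F1), not the PEDANTS method itself; the example answers are hypothetical.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    """Exact match after normalization: brittle on verbose, free-form answers."""
    return normalize(prediction) == normalize(gold)

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1: gives partial credit, but still penalizes paraphrases."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    gold = "George Washington"
    verbose = "The first U.S. president was George Washington."
    print(exact_match(verbose, gold))          # False: EM rejects a correct verbose answer
    print(round(token_f1(verbose, gold), 3))   # 0.5: only partial credit
```

The example shows the misalignment the abstract describes: a correct but verbose LLM answer fails exact match outright and receives only partial token-F1 credit.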