Return of EM: Entity-driven Answer Set Expansion for QA Evaluation
Dongryeol Lee, Minwoo Lee, Kyungmin Min, Joonsuk Park, Kyomin Jung
arXiv.org Artificial Intelligence
Recently, directly using large language models (LLMs) has been shown to be the most reliable method to evaluate QA models. However, it suffers from limited interpretability, high cost, and environmental harm. To address these issues, we propose to use soft exact match (EM) with entity-driven answer set expansion. Our approach expands the gold answer set to include diverse surface forms, based on the observation that the surface forms often follow particular patterns depending on the entity type. The experimental results show that our method outperforms traditional evaluation methods by a large margin. Moreover, the reliability of our evaluation method is comparable to that of LLM-based ones, while offering the benefits of high interpretability and reduced environmental harm.
Jun-11-2024
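The abstract describes scoring predictions with soft exact match against a gold answer set that has been expanded with entity-type-specific surface forms. The sketch below is a minimal illustration of that idea, not the paper's implementation: the expansion rules (surname for PERSON, year for DATE) and the containment-based soft EM are assumptions chosen only to make the mechanism concrete.

```python
import re
import string


def normalize(text: str) -> str:
    """Lowercase, drop articles and punctuation, collapse whitespace (SQuAD-style normalization)."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def expand_gold_answers(answer: str, entity_type: str) -> set[str]:
    """Illustrative entity-type-specific expansion rules (hypothetical, not the paper's exact patterns)."""
    variants = {answer}
    if entity_type == "PERSON":
        # e.g., "Barack Obama" -> also accept the surname alone
        parts = answer.split()
        if len(parts) > 1:
            variants.add(parts[-1])
    elif entity_type == "DATE":
        # e.g., "June 11, 2024" -> also accept just the year
        year = re.search(r"\b(1[0-9]{3}|2[0-9]{3})\b", answer)
        if year:
            variants.add(year.group(0))
    return variants


def soft_em(prediction: str, gold_answers: set[str]) -> bool:
    """Soft exact match: credit the prediction if any expanded gold answer appears within it."""
    pred = normalize(prediction)
    return any(normalize(g) in pred for g in gold_answers)


# Usage: expand the gold set once per question, then score predictions against it.
gold = expand_gold_answers("Barack Obama", "PERSON")
print(soft_em("The answer is Obama.", gold))  # True
print(soft_em("George Bush", gold))           # False
```

Because the expanded set is an explicit list of accepted strings, each judgment can be traced to a specific matched surface form, which is the interpretability advantage the abstract contrasts with LLM-based evaluation.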