DAFE: LLM-Based Evaluation Through Dynamic Arbitration for Free-Form Question-Answering
Evaluating the free-form responses generated by Large Language Models (LLMs) remains a challenge due to their diverse and open-ended nature. Traditional supervised signal-based automatic metrics fail to capture semantic equivalence or handle the variability of open-ended responses, while human evaluation, though reliable, is resource-intensive. Leveraging LLMs as evaluators offers a promising alternative due to their strong language understanding and instruction-following capabilities. Building on these capabilities, we propose the Dynamic Arbitration Framework for Evaluation (DAFE), which employs two primary LLMs as judges and engages a third arbitrator only in cases of disagreement. This selective arbitration prioritizes evaluation reliability while reducing unnecessary computational demands compared to conventional majority voting. DAFE combines task-specific reference answers with dynamic arbitration to enhance judgment accuracy, yielding significant improvements in evaluation metrics such as Macro F1 and Cohen's Kappa. Through experiments, including a comprehensive human evaluation, we demonstrate DAFE's ability to provide consistent, scalable, and resource-efficient assessments, establishing it as a robust framework for evaluating free-form model outputs.
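The abstract's core mechanism can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the `judge` placeholder, the model names, and the binary verdict labels are assumptions introduced here for clarity.

```python
from typing import Literal

Verdict = Literal["correct", "incorrect"]

def judge(model: str, question: str, candidate: str, reference: str) -> Verdict:
    """Placeholder for a single LLM-as-judge call that compares a candidate
    answer against the task-specific reference answer."""
    raise NotImplementedError("wire this to your LLM API of choice")

def dafe_evaluate(question: str, candidate: str, reference: str,
                  primary_a: str = "judge-model-a",
                  primary_b: str = "judge-model-b",
                  arbitrator: str = "judge-model-c") -> Verdict:
    """Two primary judges score the answer; the third arbitrator is invoked
    only when they disagree, so agreed-upon items cost two judge calls
    rather than the three a fixed majority vote always requires."""
    v1 = judge(primary_a, question, candidate, reference)
    v2 = judge(primary_b, question, candidate, reference)
    if v1 == v2:
        return v1  # agreement: accept the shared verdict
    return judge(arbitrator, question, candidate, reference)  # tie-break on disagreement
```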
arXiv.org Artificial Intelligence
Mar-11-2025
- Country:
- Asia > Middle East > UAE (0.14)
- Europe > Germany (0.46)
- North America > United States (0.46)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Government (1.00)
- Health & Medicine (0.92)
- Law > Alternative Dispute Resolution (1.00)
- Technology: