Support Evaluation for the TREC 2024 RAG Track: Comparing Human versus LLM Judges
Nandan Thakur, Ronak Pradeep, Shivani Upadhyay, Daniel Campos, Nick Craswell, Jimmy Lin
Retrieval-augmented generation (RAG) enables large language models (LLMs) to generate answers with citations from source documents containing "ground truth", thereby reducing system hallucinations. A crucial factor in RAG evaluation is "support": whether the information in the cited documents actually supports the answer. To this end, we conducted a large-scale comparative study of 45 participant submissions on 36 topics from the TREC 2024 RAG Track, comparing an automatic LLM judge (GPT-4o) against human judges for support assessment. We considered two conditions: (1) fully manual assessment from scratch and (2) manual assessment with post-editing of LLM predictions. Our results indicate that for 56% of the manual from-scratch assessments, human and GPT-4o predictions match perfectly (on a three-level scale), increasing to 72% in the manual with post-editing condition. Furthermore, by carefully analyzing the disagreements in an unbiased study, we found that an independent human judge agrees more closely with GPT-4o than with the original human judge, suggesting that LLM judges can be a reliable alternative for support assessment. To conclude, we provide a qualitative analysis of human and GPT-4o errors to help guide future iterations of support assessment.
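To illustrate how headline agreement numbers like the 56% and 72% exact-match figures can be computed, here is a minimal Python sketch that scores agreement between human and LLM support labels on a three-level scale. The label names and toy data are assumptions for illustration; the paper does not publish this code, and the actual TREC judgments are not reproduced here.

```python
# Minimal sketch: exact-match agreement between two judges on a three-level
# support scale. Label names and the toy data below are hypothetical; they
# stand in for the track's actual per-citation support judgments.
from collections import Counter

LEVELS = ("no_support", "partial_support", "full_support")  # assumed labels

def exact_match_agreement(human: list[str], llm: list[str]) -> float:
    """Fraction of items on which both judges assign the same level."""
    if len(human) != len(llm):
        raise ValueError("judgment lists must be aligned")
    return sum(h == g for h, g in zip(human, llm)) / len(human)

def confusion(human: list[str], llm: list[str]) -> Counter:
    """Count (human_label, llm_label) pairs to inspect where judges diverge."""
    return Counter(zip(human, llm))

if __name__ == "__main__":
    # Toy judgments for four cited passages (illustrative only).
    human = ["full_support", "partial_support", "no_support", "full_support"]
    llm   = ["full_support", "no_support",      "no_support", "full_support"]
    print(f"exact match: {exact_match_agreement(human, llm):.0%}")  # 75%
    for (h, g), n in confusion(human, llm).items():
        print(f"human={h:16s} llm={g:16s} count={n}")
```

Since exact-match percentages alone do not account for label imbalance, a chance-corrected statistic such as Cohen's kappa would be a natural complement when comparing judges.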
arXiv.org Artificial Intelligence
Apr-22-2025