Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best?
This article surveys evaluation models that automatically detect hallucinations in Retrieval-Augmented Generation (RAG), and presents a comprehensive benchmark of their performance across six RAG applications. The methods included in our study are: LLM-as-a-Judge, Prometheus, Lynx, the Hughes Hallucination Evaluation Model (HHEM), and the Trustworthy Language Model (TLM). These approaches are all reference-free, requiring no ground-truth answers/labels to catch incorrect LLM responses. Our study reveals that, across diverse RAG applications, some of these approaches consistently detect incorrect RAG responses with high precision/recall.
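For illustration only (not the paper's implementation), the sketch below shows the reference-free idea behind an LLM-as-a-Judge check: the judge model sees only the retrieved context and the generated answer, never a ground-truth label. The model name, prompt wording, and helper function are illustrative assumptions.

```python
# Minimal sketch of a reference-free "LLM-as-a-Judge" hallucination check.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading a RAG system.
Context:
{context}

Answer:
{answer}

Is every claim in the Answer fully supported by the Context?
Reply with exactly one word: SUPPORTED or UNSUPPORTED."""


def is_hallucinated(context: str, answer: str, model: str = "gpt-4o-mini") -> bool:
    """Return True if the judge flags the answer as unsupported by the retrieved context."""
    response = client.chat.completions.create(
        model=model,  # model choice is an assumption, not from the paper
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(context=context, answer=answer)}],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("UNSUPPORTED")


# Example usage: flag an answer that contradicts the retrieved passage.
ctx = "The Eiffel Tower was completed in 1889 and is 330 metres tall."
print(is_hallucinated(ctx, "The Eiffel Tower was completed in 1889."))  # expected: False
print(is_hallucinated(ctx, "The Eiffel Tower was completed in 1900."))  # expected: True
```

Methods such as HHEM or TLM replace the raw judge prompt with a trained grounding model or a confidence estimate, but the reference-free interface (context + answer in, hallucination verdict out) is the same.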
arXiv.org Artificial Intelligence
Apr-7-2025