The Case for Repeatable, Open, and Expert-Grounded Hallucination Benchmarks in Large Language Models
Norman, Justin D., Rivera, Michael U., Hughes, D. Alex
arXiv.org Artificial Intelligence
Plausible but inaccurate tokens in model-generated text are widely believed to be pervasive and problematic for the responsible adoption of language models. Despite this concern, little scientific work has attempted to measure the prevalence of language model hallucination comprehensively. In this paper, we argue that language models should be evaluated using repeatable, open, and domain-contextualized hallucination benchmarking. We present a taxonomy of hallucinations alongside a case study demonstrating that when experts are absent from the early stages of data creation, the resulting hallucination metrics lack validity and practical utility.
Nov-6-2025