HAD: HAllucination Detection Language Models Based on a Comprehensive Hallucination Taxonomy
Fan Xu, Xinyu Hu, Zhenghan Yu, Li Lin, Xu Zhang, Yang Zhang, Wei Zhou, Jinjie Gu, Xiaojun Wan
–arXiv.org Artificial Intelligence
The increasing reliance on natural language generation (NLG) models, particularly large language models, has raised concerns about the reliability and accuracy of their outputs. A key challenge is hallucination, where models produce plausible but incorrect information. As a result, hallucination detection has become a critical task. In this work, we introduce a comprehensive hallucination taxonomy with 11 categories across various NLG tasks and propose the HAllucination Detection (HAD) models (https://github.com/pku0xff/HAD), which integrate hallucination detection, span-level identification, and correction into a single inference process. Trained on a carefully constructed synthetic dataset of about 90K samples, our HAD models are versatile and can be applied to various NLG tasks. We also carefully annotate a test set for hallucination detection, called HADTest, which contains 2,248 samples. Evaluations on in-domain and out-of-domain test sets show that our HAD models generally outperform the existing baselines, achieving state-of-the-art results on HaluEval, FactCHD, and FaithBench, confirming their robustness and versatility.
Oct-23-2025