Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems
Yihan Li, Xiyuan Fu, Ghanshyam Verma, Paul Buitelaar, Mingming Liu
arXiv.org Artificial Intelligence
Hallucination remains one of the key obstacles to the reliable deployment of large language models (LLMs), particularly in real-world applications. Among various mitigation strategies, Retrieval-Augmented Generation (RAG) and reasoning enhancement have emerged as two of the most effective and widely adopted approaches, marking a shift from merely suppressing hallucinations to balancing creativity and reliability. However, their synergistic potential and underlying mechanisms for hallucination mitigation have not yet been systematically examined. This survey adopts an application-oriented perspective of capability enhancement to analyze how RAG, reasoning enhancement, and their integration in Agentic Systems mitigate hallucinations. We propose a taxonomy distinguishing knowledge-based and logic-based hallucinations, systematically examine how RAG and reasoning address each, and present a unified framework supported by real-world applications, evaluations, and benchmarks.
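The abstract names Retrieval-Augmented Generation as one of the two main mitigation strategies for knowledge-based hallucination. A minimal sketch of that retrieve-then-generate pattern is below; the toy corpus, token-overlap scorer, and prompt template are illustrative assumptions, not the survey's own implementation.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens (a stand-in for a real dense retriever)."""
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    """Relevance measured as token overlap between query and document."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the query with retrieved context before calling the LLM,
    so the answer can be grounded in evidence rather than parametric memory."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

# Hypothetical three-document corpus for illustration.
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts light energy into chemical energy.",
    "Paris is the capital of France.",
]
prompt = build_prompt("Where is the Eiffel Tower?", corpus)
```

In a real system the overlap scorer would be replaced by an embedding-based retriever and `prompt` would be sent to an LLM; the grounding step itself is what the survey credits with suppressing unsupported claims.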
Oct-29-2025
- Country:
  - Asia
    - China > Hubei Province
      - Wuhan (0.04)
    - Middle East > UAE
      - Abu Dhabi Emirate > Abu Dhabi (0.04)
  - Europe
    - Ireland
      - Connaught > County Galway
        - Galway (0.04)
      - Leinster > County Dublin
        - Dublin (0.04)
    - Italy > Calabria
      - Catanzaro Province > Catanzaro (0.04)
  - North America
    - Dominican Republic (0.04)
    - United States > Florida
      - Miami-Dade County > Miami (0.04)
- Genre:
- Overview (1.00)
- Research Report (1.00)
- Industry:
- Health & Medicine > Therapeutic Area (0.46)