Hallucination is Inevitable for LLMs with the Open World Assumption
arXiv.org Artificial Intelligence
Large Language Models (LLMs) exhibit impressive linguistic competence but also produce inaccurate or fabricated outputs, often called "hallucinations". Engineering approaches usually regard hallucination as a defect to be minimized, while formal analyses have argued for its theoretical inevitability. Yet both perspectives remain incomplete when considering the conditions required for artificial general intelligence (AGI). This paper reframes "hallucination" as a manifestation of the generalization problem. Under the Closed World assumption, where training and test distributions are consistent, hallucinations may be mitigated. Under the Open World assumption, however, where the environment is unbounded, hallucinations become inevitable. This paper further develops a classification of hallucination, distinguishing cases that may be corrected from those that appear unavoidable under open-world conditions. On this basis, it suggests that "hallucination" should be approached not merely as an engineering defect but as a structural feature to be tolerated and made compatible with human intelligence.
Oct 8, 2025
- Country:
  - Asia > China > Beijing (0.04)
  - Middle East > Republic of Türkiye (0.04)
  - North America > United States (0.04)
- Genre:
  - Research Report (0.50)