Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models
arXiv.org Artificial Intelligence, Jul-16-2025
In this paper we explore hallucinations and related capability limitations in LLMs and LLM-based agents from the perspective of computational complexity. We show that beyond a certain complexity, LLMs are incapable of carrying out computational and agentic tasks or verifying their accuracy.

Introduction

With the widespread adoption of transformer-based language models ("LLMs") in AI, there is significant interest in the limits of LLMs' capabilities, specifically so-called "hallucinations": occurrences in which LLMs provide spurious, factually incorrect, or nonsensical information [1, 2] when prompted on certain subjects. Furthermore, there is growing interest in "agentic" uses of LLMs, that is, using LLMs to create "agents" that act autonomously or semi-autonomously to carry out various tasks, including tasks with applications in the real world. This makes it important to understand the types of tasks LLMs can and cannot perform.