AI hallucinates because it's trained to fake answers it doesn't know
Earlier today, OpenAI completed a controversial restructuring of its for-profit arm into a public benefit corporation: the latest gust in a whirlwind that has swept up hundreds of billions of dollars of global investment in artificial intelligence (AI) tools. But even as the company (founded as a nonprofit, now valued at $500 billion) closes out its long-awaited restructuring, a nagging issue with its core offering remains unresolved: hallucinations. Large language models (LLMs), such as those that underpin OpenAI's popular ChatGPT platform, are prone to confidently spouting factually incorrect statements. These blips are often attributed to bad training data, but in a preprint posted last month, a team from OpenAI and the Georgia Institute of Technology shows that even with flawless training data, LLMs can never be all-knowing, in part because some questions are just inherently unanswerable. However, that doesn't mean hallucinations are inevitable.
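To build intuition for why flawless data isn't enough, here is a minimal toy simulation in Python, a hypothetical illustration rather than the preprint's actual analysis. It assumes facts (here, birthdays) that appear at most once in training and follow no learnable pattern:

```python
import random

random.seed(0)

DAYS = 365
N_PEOPLE = 10_000

# Ground truth: each person's birthday is an arbitrary fact,
# not derivable from any pattern in the rest of the data.
birthdays = {f"person_{i}": random.randrange(DAYS) for i in range(N_PEOPLE)}

# Flawless but incomplete training data: half the people appear, each exactly once.
train = {name: day for name, day in list(birthdays.items())[: N_PEOPLE // 2]}

def model(name: str) -> int:
    """A toy 'model' that memorizes training facts perfectly and, when asked
    about anyone else, is forced to guess (it never says 'I don't know')."""
    if name in train:
        return train[name]          # perfect recall of facts it has seen
    return random.randrange(DAYS)   # forced guess: a hallucination in waiting

# Evaluate only on people the model never saw during training.
unseen = [name for name in birthdays if name not in train]
errors = sum(model(name) != birthdays[name] for name in unseen)
print(f"error rate on unseen facts: {errors / len(unseen):.3f}")  # ~364/365 ~ 0.997
```

The training set here contains no errors at all; the wrong answers arise purely because the unseen facts aren't recoverable from it, and the "model" is set up to always answer rather than admit uncertainty.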
Oct-28-2025, 23:02:00 GMT