AI hallucinations are getting worse – and they're here to stay
AI chatbots from tech companies such as OpenAI and Google have received so-called reasoning upgrades in recent months, ideally to make them better at giving us answers we can trust. But recent testing suggests they sometimes perform worse than previous models. The errors made by chatbots, known as "hallucinations", have been a problem from the start, and it is becoming clear we may never get rid of them.

Hallucination is a blanket term for certain kinds of mistakes made by the large language models (LLMs) that power systems like OpenAI's ChatGPT and Google's Gemini. It is best known as a description of the way these models sometimes present false information as true. But it can also refer to an AI-generated answer that is factually accurate yet irrelevant to the question asked, or that fails to follow instructions in some other way.
9 May 2025, 20:00 GMT