Shining a Light on AI Hallucinations

Communications of the ACM 

The ability of artificial intelligence (AI) to sift through mountains of information and deliver useful results is rapidly reshaping the way people learn, work, and handle everyday tasks. Yet for all the convenience and value generative AI and large language models (LLMs) deliver, they have a problem: despite producing text, video, and images that appear accurate and convincing, they sometimes hallucinate. These fabrications, which range from minor, plausible errors to utterly absurd assertions, are a legitimate cause for concern. At best, the resulting misinformation or botched image is merely amusing or annoying.