Medical Hallucinations in Foundation Models and Their Impact on Healthcare
Yubin Kim, Hyewon Jeong, Shan Chen, Shuyue Stella Li, Mingyu Lu, Kumail Alhamoud, Jimin Mun, Cristina Grau, Minseok Jung, Rodrigo Gameiro, Lizhou Fan, Eugene Park, Tristan Lin, Joonsik Yoon, Wonjin Yoon, Maarten Sap, Yulia Tsvetkov, Paul Liang, Xuhai Xu, Xin Liu, Daniel McDuff, Hyeonhoon Lee, Hae Won Park, Samir Tulebaev, Cynthia Breazeal
Foundation models capable of processing and generating multi-modal data have transformed AI's role in medicine. However, a key limitation of their reliability is hallucination, in which inaccurate or fabricated information can affect clinical decisions and patient safety. We define medical hallucination as any instance in which a model generates misleading medical content. This paper examines the unique characteristics, causes, and implications of medical hallucinations, with a particular focus on how these errors manifest in real-world clinical scenarios. Our contributions include (1) a taxonomy for understanding and addressing medical hallucinations, (2) benchmarking of models on a medical hallucination dataset and physician-annotated LLM responses to real medical cases, providing direct insight into the clinical impact of hallucinations, and (3) a multi-national survey of clinicians on their experiences with medical hallucinations. Our results reveal that inference techniques such as Chain-of-Thought (CoT) prompting and Search-Augmented Generation can effectively reduce hallucination rates, yet non-trivial levels of hallucination persist despite these improvements. These findings underscore the ethical and practical imperative for robust detection and mitigation strategies, establishing a foundation for regulatory policies that prioritize patient safety and maintain clinical integrity as AI becomes more integrated into healthcare. Feedback from clinicians highlights the urgent need not only for technical advances but also for clearer ethical and regulatory guidelines to ensure patient safety. A repository organizing the paper's resources, summaries, and additional information is available at https://github.com/mitmedialab/medical_hallucination.
arXiv.org Artificial Intelligence
Feb-25-2025
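The abstract credits inference-time techniques such as CoT prompting and Search-Augmented Generation with reducing hallucination rates. The sketch below is an illustration only, not the paper's actual evaluation harness: it shows how these two prompting strategies are typically wired up. The `query_llm` function is a hypothetical stand-in for any real LLM client.

```python
# Illustrative sketch of the two prompting strategies the abstract credits
# with reducing medical hallucinations. `query_llm` is a hypothetical
# placeholder (an assumption, not from the paper); wire it to any LLM API.

def query_llm(prompt: str) -> str:
    """Placeholder for an actual model call."""
    raise NotImplementedError("Connect this to your LLM provider.")


def direct_prompt(question: str) -> str:
    # Baseline: ask the model to answer directly, with no scaffolding.
    return query_llm(f"Answer the medical question concisely.\n\nQ: {question}\nA:")


def cot_prompt(question: str) -> str:
    # Chain-of-Thought: elicit explicit stepwise clinical reasoning
    # before the final answer.
    return query_llm(
        "Reason step by step about the clinical evidence, then state a "
        f"final answer.\n\nQ: {question}\nReasoning:"
    )


def search_augmented_prompt(question: str, retrieved_passages: list[str]) -> str:
    # Search-Augmented Generation: ground the answer in retrieved sources
    # and instruct the model to abstain when the sources are insufficient.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved_passages))
    return query_llm(
        "Using ONLY the sources below, answer the question and cite source "
        "numbers. If the sources are insufficient, say so rather than "
        f"guessing.\n\nSources:\n{context}\n\nQ: {question}\nA:"
    )
```

Note that the sketch covers only prompt construction; the retrieval step itself, and the detection and mitigation strategies the paper calls for, sit outside it.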
- Country:
  - Asia (1.00)
  - Europe (0.92)
  - North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Genre:
  - Overview (1.00)
  - Research Report
    - Experimental Study (1.00)
    - New Finding (1.00)
- Industry:
  - Government > Regional Government > North America Government > United States Government > FDA (0.46)
  - Health & Medicine
    - Consumer Health (1.00)
    - Diagnostic Medicine > Imaging (1.00)
    - Government Relations & Public Policy (1.00)
    - Health Care Providers & Services (1.00)
    - Health Care Technology > Medical Record (0.67)
    - Pharmaceuticals & Biotechnology (1.00)
    - Therapeutic Area
      - Cardiology/Vascular Diseases (1.00)
      - Endocrinology (0.67)
      - Immunology (1.00)
      - Infections and Infectious Diseases (1.00)
      - Oncology (0.93)
      - Psychiatry/Psychology (0.67)
      - Pulmonary/Respiratory Diseases (1.00)
  - Information Technology > Security & Privacy (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Applied AI (1.00)
      - Cognitive Science > Problem Solving (0.93)
      - Issues > Social & Ethical Issues (1.00)
      - Machine Learning > Neural Networks > Deep Learning (1.00)
      - Natural Language
        - Chatbot (1.00)
        - Generation (1.00)
        - Large Language Model (1.00)
        - Text Processing (0.92)
      - Representation & Reasoning
        - Diagnosis (1.00)
        - Expert Systems (1.00)
      - Vision (0.92)
    - Data Science > Data Mining (1.00)