Beyond Accuracy: Rethinking Hallucination and Regulatory Response in Generative AI
Zihao Li, Weiwei Yi, Jiahong Chen
arXiv.org Artificial Intelligence
Hallucination in generative AI is often treated as a technical failure to produce factually correct output. Yet this framing underrepresents the broader significance of hallucinated content in language models, which may appear fluent, persuasive, and contextually appropriate while conveying distortions that escape conventional accuracy checks. This paper critically examines how regulatory and evaluation frameworks have inherited a narrow view of hallucination, one that prioritises surface verifiability over deeper questions of meaning, influence, and impact. We propose a layered approach to understanding hallucination risks, encompassing epistemic instability, user misdirection, and social-scale effects. Drawing on interdisciplinary sources and examining instruments such as the EU AI Act and the GDPR, we show that current governance models struggle to address hallucination when it manifests as ambiguity, bias reinforcement, or normative convergence. Rather than improving factual precision alone, we argue for regulatory responses that account for language's generative nature, the asymmetries between system and user, and the shifting boundaries between information, persuasion, and harm.
Oct-27-2025
- Country:
- Europe
- Belgium (0.14)
- Netherlands (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- Oxfordshire > Oxford (0.04)
- North America
- Canada (0.28)
- Mexico > Mexico City
- Mexico City (0.04)
- United States (0.28)
- Genre:
- Research Report (1.00)
- Industry:
- Education (0.93)
- Government > Regional Government (0.93)
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (1.00)
- Law
- Civil Rights & Constitutional Law (0.93)
- Statutes (0.93)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.93)
- Technology:
- Information Technology > Artificial Intelligence
- Issues > Social & Ethical Issues (1.00)
- Machine Learning > Neural Networks
- Deep Learning > Generative AI (0.86)
- Natural Language
- Chatbot (1.00)
- Generation (1.00)
- Large Language Model (1.00)