HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models
Xinyan Jiang, Hang Ye, Yongxin Zhu, Xiaoying Zheng, Zikang Chen, Jun Gong
arXiv.org Artificial Intelligence
Large Language Models (LLMs) often hallucinate, producing outputs that are contextually unfaithful or factually incorrect. We introduce HICD, a method that deliberately induces hallucinations for contrastive decoding in order to mitigate hallucinations. Unlike existing contrastive decoding methods, HICD selects attention heads crucial to the model's prediction as inducing heads, induces hallucinations by dispersing the attention of these heads, and then contrasts the hallucinated outputs with the original outputs to obtain the final prediction. Our approach significantly improves performance on tasks requiring contextual faithfulness, such as context completion, reading comprehension, and question answering, and also improves factuality on tasks requiring accurate knowledge recall. We demonstrate that our inducing-head selection and attention-dispersion method yields more "contrast-effective" hallucinations for contrastive decoding, outperforming other hallucination-inducing methods. Our findings offer a promising strategy for reducing hallucinations by inducing them in a controlled manner, enhancing the performance of LLMs across a wide range of tasks.
Mar-17-2025
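Since the abstract describes the mechanism only at a high level, below is a minimal, self-contained PyTorch sketch of the two ideas it names: dispersing an attention head's weights toward a uniform distribution to induce hallucinated behavior, and contrasting the induced logits with the original ones. The head-selection criterion, the uniform dispersion, the toy projection `proj`, and the `(1 + alpha) * original - alpha * induced` combination are illustrative assumptions, not the paper's exact formulation.

```python
# A toy sketch on synthetic tensors, not a real LLM; head selection, the
# uniform "dispersion", and the contrast rule below are assumptions made for
# illustration, not HICD's exact procedure.
import torch


def attention(q, k, v, disperse: bool = False):
    """Single-head scaled dot-product attention.

    If `disperse` is True, the attention weights are replaced with a uniform
    distribution over positions, destroying the head's focus; in a full model
    this is the kind of perturbation used to induce hallucinated outputs.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    weights = torch.softmax(scores, dim=-1)
    if disperse:
        weights = torch.full_like(weights, 1.0 / weights.size(-1))
    return weights @ v


def contrastive_logits(original, induced, alpha: float = 1.0):
    """Amplify the original prediction relative to the hallucination-induced one
    (a common contrastive-decoding form, assumed here for illustration)."""
    return (1 + alpha) * original - alpha * induced


if __name__ == "__main__":
    torch.manual_seed(0)
    seq, dim, vocab = 6, 16, 10
    q = k = v = torch.randn(seq, dim)

    # Original vs. dispersed attention for one hypothetical "inducing head".
    out_original = attention(q, k, v)
    out_induced = attention(q, k, v, disperse=True)

    # Toy output projection standing in for the rest of the model.
    proj = torch.randn(dim, vocab)
    logits_original = out_original[-1] @ proj
    logits_induced = out_induced[-1] @ proj

    final = contrastive_logits(logits_original, logits_induced, alpha=1.0)
    print("next-token choice:", final.argmax().item())
```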