Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration
Mehrdad Fazli, Bowen Wei, Ahmet Sari, Ziwei Zhu
– arXiv.org Artificial Intelligence
Large vision-language models (LVLMs) achieve impressive performance on multimodal tasks but often suffer from hallucination, confidently describing objects or attributes that are not present in the image. Current training-free interventions struggle to maintain accuracy in open-ended and long-form generation scenarios. We introduce the Confidence-Aware Attention Calibration (CAAC) framework to address this challenge by targeting two key biases: spatial perception bias, which distributes attention disproportionately across image tokens, and modality bias, which shifts focus from visual to textual inputs over the course of generation. CAAC employs a two-step approach: Visual-Token Calibration (VTC) to balance attention across visual tokens, and Adaptive Attention Re-Scaling (AAR) to reinforce visual grounding guided by the model's confidence. This confidence-driven adjustment ensures consistent visual alignment during generation. Experiments on the CHAIR, AMBER, and POPE benchmarks demonstrate that CAAC outperforms baselines, particularly in long-form generation, effectively reducing hallucination.
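To make the two-step adjustment concrete, here is a minimal illustrative sketch, not the paper's implementation, of what a CAAC-style calibration of a single attention row could look like: a VTC-like step smooths attention over image tokens toward uniform while preserving their total mass, and an AAR-like step scales visual attention up when the model's token confidence drops. The blending weight `alpha`, the boost factor `beta`, the tensor layout, and the use of the next-token probability as the confidence signal are all assumptions made for this sketch.

```python
import torch

def caac_step(attn: torch.Tensor, num_vis: int, confidence: float,
              alpha: float = 0.5, beta: float = 1.5) -> torch.Tensor:
    """Illustrative CAAC-style calibration for one generation step.

    attn:       (heads, seq_len) attention row for the current query token;
                positions [0, num_vis) are assumed to be image tokens.
    confidence: model confidence for the current step, e.g. the max
                next-token probability (an assumption, not from the paper).
    """
    attn = attn.clone()
    vis = attn[:, :num_vis]                   # attention paid to image tokens
    vis_mass = vis.sum(dim=-1, keepdim=True)  # total visual attention per head

    # VTC-like step: blend toward a uniform distribution over image tokens.
    # The blend preserves each head's total visual mass exactly.
    uniform = vis_mass / num_vis
    vis = (1.0 - alpha) * vis + alpha * uniform

    # AAR-like step: when confidence is low, boost visual attention to
    # reinforce visual grounding; when confidence is 1.0, leave it unchanged.
    scale = 1.0 + beta * (1.0 - confidence)
    attn[:, :num_vis] = vis * scale

    # Renormalize each head so the row sums to 1 again.
    return attn / attn.sum(dim=-1, keepdim=True)

# Usage: 8 heads, 576 image tokens + 40 text tokens, a low-confidence step.
attn = torch.softmax(torch.randn(8, 616), dim=-1)
calibrated = caac_step(attn, num_vis=576, confidence=0.35)
```

Renormalizing after the boost means AAR effectively shifts attention mass from textual to visual tokens, which matches the abstract's framing of counteracting modality bias.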
Aug-13-2025
- Country:
  - North America
    - Canada
      - British Columbia > Vancouver (0.04)
      - Quebec > Montreal (0.04)
    - United States > Florida
      - Miami-Dade County > Miami (0.04)
- Genre:
  - Research Report (0.64)
- Industry:
  - Health & Medicine (0.48)
- Technology:
  - Information Technology > Artificial Intelligence
    - Natural Language (1.00)
    - Vision (1.00)