Conscious Gaze: Adaptive Attention Mechanisms for Hallucination Mitigation in Vision-Language Models
Bu, Weijue; Yuan, Guan; Zhang, Guixian
arXiv.org Artificial Intelligence
Abstract: Large Vision-Language Models (VLMs) often exhibit text inertia, where attention drifts from visual evidence toward linguistic priors, resulting in object hallucinations. Existing decoding strategies intervene only at the output logits and thus cannot correct internal reasoning drift, while recent internal-control methods based on heuristic head suppression or global steering vectors lack principled grounding. We introduce Conscious Gaze (CG-VLM), a training-free, inference-time framework that converts game-theoretic interpretability into actionable decoding control. A Cognitive Demand Sensor built on Harsanyi interactions estimates instantaneous vision-text synergy and identifies moments when visual grounding is necessary. CG-VLM achieves state-of-the-art results on POPE and CHAIR across InstructBLIP, LLaVA, Qwen-VL, and mPLUG, while preserving general capabilities, demonstrating that token-level sensing enables precise, context-aware intervention without compromising foundational knowledge.
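The abstract does not detail how the Cognitive Demand Sensor is computed, but the Harsanyi interaction it builds on has a standard closed form: the dividend of a token coalition S under a value function v is I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), with positive values indicating synergy among the tokens in S. A minimal sketch of that quantity follows; the value function `v`, the token names, and the numbers are illustrative assumptions, not the paper's actual method.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, from the empty set up to s itself."""
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_interaction(S, v):
    """Harsanyi dividend I(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T).

    Positive values indicate synergy among the members of S beyond
    what smaller sub-coalitions already account for.
    """
    S = frozenset(S)
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in powerset(S))

# Toy value function over one vision token 'v' and one text token 't'
# (hypothetical numbers, for illustration only): each token alone
# contributes 1.0, but together they contribute 3.0.
_vals = {
    frozenset(): 0.0,
    frozenset({'v'}): 1.0,
    frozenset({'t'}): 1.0,
    frozenset({'v', 't'}): 3.0,
}
v = _vals.__getitem__

print(harsanyi_interaction({'v', 't'}, v))  # → 1.0 (vision-text synergy)
```

In a sensor of this kind, a high vision-text synergy at the current decoding step would signal that visual grounding is needed, while a value near zero would indicate the tokens contribute only additively.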
Dec-8-2025
- Country:
- Asia > China > Jiangsu Province > Xuzhou (0.04)
- Genre:
- Research Report (0.50)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (0.47)
- Natural Language (1.00)
- Vision (1.00)