HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding
Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, Jiawei Zhou
– arXiv.org Artificial Intelligence
While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH). We introduce HALC, a novel decoding algorithm designed to mitigate OH in LVLMs. HALC leverages distinct fine-grained optimal visual information in vision-language tasks and operates on both local and global contexts simultaneously. Specifically, HALC integrates a robust auto-focal grounding mechanism (locally) to correct hallucinated tokens on the fly, and a specialized beam search algorithm (globally) to significantly reduce OH while preserving text generation quality. Additionally, HALC can be integrated into any LVLM as a plug-and-play module without extra training. Extensive experimental studies demonstrate the effectiveness of HALC in reducing OH, outperforming state-of-the-art methods across four benchmarks.
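To make the two-level decoding flow concrete, below is a minimal, hypothetical Python sketch: a local focal-contrast step that compares next-token distributions computed from several candidate visual crops (fields of view) and contrasts the most divergent pair, wrapped in a small global beam search. The FOV-selection rule, the contrast formula, and all function names (`focal_contrast_step`, `fake_step`, etc.) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two next-token distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def focal_contrast_step(logits_per_fov, alpha=1.0):
    """One hypothetical focal-contrast decoding step (local level).

    `logits_per_fov` holds next-token logits computed from several visual
    crops (fields of view) around a grounded region. We pick the pair of
    FOVs whose distributions disagree the most and contrast them; the
    combination rule is illustrative, not the paper's exact formula.
    """
    probs = [softmax(l) for l in logits_per_fov]
    best_pair, best_div = (0, 0), -1.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            d = jsd(probs[i], probs[j])
            if d > best_div:
                best_div, best_pair = d, (i, j)
    i, j = best_pair
    contrast = (1 + alpha) * np.log(probs[i] + 1e-12) - alpha * np.log(probs[j] + 1e-12)
    return softmax(contrast)

def beam_search(step_fn, beam_width=3, max_len=8):
    """Minimal beam search (global level) over per-step distributions.

    In HALC the global beam would additionally be scored against object
    hallucination; here we simply keep the top-scoring prefixes.
    """
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            dist = step_fn(seq)
            for tok in np.argsort(dist)[-beam_width:]:
                candidates.append((seq + [int(tok)], score + np.log(dist[tok] + 1e-12)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    VOCAB = 50

    def fake_step(prefix):
        # Stand-in for an LVLM: random logits from three imagined FOVs.
        fovs = [rng.normal(size=VOCAB) for _ in range(3)]
        return focal_contrast_step(fovs)

    print(beam_search(fake_step))
```

The sketch only illustrates the structure (contrast the most informative visual views locally, then search globally over candidate continuations); plugging in a real LVLM would require its grounded crops and logits in place of `fake_step`.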
Jun-10-2024
- Country:
- Europe > Switzerland
- North America > United States
- Illinois (0.28)
- Genre:
- Research Report > New Finding (0.66)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language > Large Language Model (0.93)
- Representation & Reasoning > Search (0.87)
- Vision (1.00)