Mitigating Object Hallucination via Robust Local Perception Search
Zixian Gao, Chao Yang, Zhanhui Zhou, Xing Xu, Chaochao Lu
arXiv.org Artificial Intelligence
Recent advancements in Multimodal Large Language Models (MLLMs) have enabled them to effectively integrate vision and language and address a variety of downstream tasks. Despite this success, these models still hallucinate: their outputs appear plausible but do not align with the content of the images. To mitigate this issue, we introduce Local Perception Search (LPS), a simple, training-free decoding method applied at inference time that effectively suppresses hallucinations. LPS leverages local visual prior information as a value function to correct the decoding process. We further observe that the impact of the local visual prior on model performance is more pronounced when image noise is high. Notably, LPS is a plug-and-play approach compatible with various models. Extensive experiments on widely used hallucination benchmarks and noisy data demonstrate that LPS significantly reduces the incidence of hallucinations compared to the baselines, with particularly strong performance in noisy settings.
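The abstract describes using a local visual prior as a value function to steer decoding. The paper's actual algorithm is not given here, so the following is only a minimal sketch of the general idea: at each decoding step, rescore the language model's top candidate tokens with a visual "value" score, so a token poorly supported by the image can be overtaken by one the local perception favors. The `perception_score` interface and the `alpha` weight are hypothetical placeholders, not the authors' method.

```python
import math

def lps_decode_step(lm_logprobs, perception_score, alpha=1.0, top_k=5):
    """One value-guided decoding step (illustrative sketch, not the paper's
    exact procedure): rescore the LM's top-k candidate tokens with a local
    visual 'value' score before picking the next token.

    lm_logprobs: dict mapping token -> log-probability from the MLLM.
    perception_score: callable token -> float in (0, 1], a stand-in for the
        local visual prior (hypothetical interface).
    alpha: weight of the visual value relative to the LM score (assumed).
    """
    candidates = sorted(lm_logprobs, key=lm_logprobs.get, reverse=True)[:top_k]
    return max(
        candidates,
        key=lambda t: lm_logprobs[t]
        + alpha * math.log(max(perception_score(t), 1e-9)),
    )

# Toy example: the LM slightly prefers a hallucinated object ("dog"),
# but the local visual prior supports "cat", flipping the choice.
logprobs = {"dog": -0.9, "cat": -1.0, "the": -3.0}
visual = {"dog": 0.2, "cat": 0.9, "the": 0.5}
print(lps_decode_step(logprobs, visual.get))  # -> cat
```

In this toy step, "dog" scores -0.9 + log(0.2) ≈ -2.51 while "cat" scores -1.0 + log(0.9) ≈ -1.11, so the visually supported token wins despite the LM's preference.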
Jun-10-2025