Can Hallucination Correction Improve Video-Language Alignment?
Lingjun Zhao, Mingyang Xie, Paola Cascante-Bonilla, Hal Daumé III, Kwonjoon Lee
Large Vision-Language Models often generate hallucinated content that is not grounded in their visual inputs. While prior work focuses on mitigating hallucinations, we instead explore leveraging hallucination correction as a training objective to improve video-language alignment. We introduce HACA, a self-training framework that learns to correct hallucinations in descriptions that do not align with the video content. By identifying and correcting these inconsistencies, HACA enhances the model's ability to align video and textual representations for spatio-temporal reasoning. Our experimental results show consistent gains on video-caption binding and text-to-video retrieval tasks, demonstrating that hallucination-correction-inspired tasks are an effective strategy for improving vision-language alignment.
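The abstract does not spell out HACA's implementation, but the core idea of turning hallucination correction into a training signal can be sketched. Everything below is an illustrative assumption, not the authors' method: the caption-corruption heuristic, the `corrupt_caption` and `haca_step` names, and the seq2seq-style model interface (`model(video_feats, input_ids) -> (batch, time, vocab)`) are hypothetical stand-ins.

```python
"""Minimal, hypothetical sketch of hallucination correction as a training
objective. HACA's actual data pipeline, architecture, and loss are not given
in the abstract; everything here is an illustrative assumption."""
import random

import torch
import torch.nn.functional as F


def corrupt_caption(caption: str, donor: str, rate: float = 0.25,
                    rng: random.Random = random.Random(0)) -> str:
    """Fabricate a misaligned ('hallucinated') description by splicing
    words from an unrelated caption into a faithful one."""
    words = caption.split()
    noise = donor.split()
    k = max(1, int(len(words) * rate))
    for i in rng.sample(range(len(words)), min(k, len(words))):
        words[i] = rng.choice(noise)
    return " ".join(words)


def haca_step(model, tokenizer, video_feats, captions, optimizer):
    """One self-training step: condition on the video plus a corrupted
    caption and learn to regenerate the faithful caption.

    Assumes a hypothetical seq2seq interface where corrupted inputs and
    faithful targets tokenize to matched padded lengths."""
    corrupted = [corrupt_caption(c, captions[(i + 1) % len(captions)])
                 for i, c in enumerate(captions)]
    inputs = tokenizer(corrupted, return_tensors="pt", padding=True)
    targets = tokenizer(captions, return_tensors="pt", padding=True)
    logits = model(video_feats, inputs.input_ids)  # (batch, time, vocab)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                           targets.input_ids.view(-1),
                           ignore_index=tokenizer.pad_token_id)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intuition this sketch tries to capture is that regenerating the faithful caption from a corrupted one is only possible by consulting the video, so the correction objective plausibly forces tighter video-text grounding than captioning alone.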
arXiv.org Artificial Intelligence
Feb-20-2025
- Country:
  - North America > United States > Maryland (0.14)
- Genre:
  - Research Report > New Finding (0.66)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks > Deep Learning (0.46)
    - Natural Language > Chatbot (0.46)
    - Natural Language > Large Language Model (0.73)
    - Representation & Reasoning (1.00)
    - Vision (1.00)