EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models
Andrés Villa, Juan León Alcázar, Motasem Alfarra, Vladimir Araujo, Alvaro Soto, Bernard Ghanem
arXiv.org Artificial Intelligence
Large language models and vision transformers have demonstrated impressive zero-shot capabilities, enabling significant transferability in downstream tasks. The fusion of these models has resulted in multi-modal architectures with enhanced instructional capabilities. Despite incorporating vast image and language pre-training, these multi-modal architectures often generate responses that deviate from the ground truth in the image data. These failure cases are known as hallucinations. Current methods for mitigating hallucinations generally focus on regularizing the language component, improving the fusion module, or ensembling multiple visual encoders to improve visual representation. In this paper, we address the hallucination issue by directly enhancing the capabilities of the visual component. Our approach, named EAGLE, is fully agnostic to the LLM or fusion module and works as a post-pretraining approach that improves the grounding and language alignment of the visual encoder. We show that a straightforward reformulation of the original contrastive pre-training task results in an improved visual encoder that can be incorporated into the instructional multi-modal architecture without additional instructional training. As a result, EAGLE achieves a significant reduction in hallucinations across multiple challenging benchmarks and tasks.
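As context for the abstract, the sketch below illustrates the standard CLIP-style contrastive pre-training objective that EAGLE builds on. The abstract only states that EAGLE reformulates this task; the specific reformulation is not described here, so the function name, projection dimension, and temperature are illustrative assumptions rather than the paper's method.

```python
# Minimal sketch of the symmetric image-text contrastive objective
# (the baseline pre-training task referenced in the abstract).
# Names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired image/text features."""
    # L2-normalize both modalities so dot products become cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Similarity logits: each image is scored against every caption in the batch.
    logits = image_features @ text_features.t() / temperature

    # Matching image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image-to-text and text-to-image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)


if __name__ == "__main__":
    # Toy batch of 8 paired embeddings of dimension 512.
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(clip_contrastive_loss(img, txt).item())
```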
Jan-5-2025
- Genre:
- Research Report > New Finding (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks > Deep Learning (0.71)
- Performance Analysis > Accuracy (0.95)
- Natural Language > Large Language Model (1.00)
- Vision (1.00)