MASS: Overcoming Language Bias in Image-Text Matching
Chung, Jiwan, Lim, Seungwon, Lee, Sangkyu, Yu, Youngjae
arXiv.org Artificial Intelligence
Pretrained vision-language models have made significant advances in multimodal tasks, including image-text retrieval. However, a major challenge in image-text matching is language bias: models rely predominantly on language priors and fail to adequately consider the visual content. We therefore present the Multimodal ASsociation Score (MASS), a framework that reduces reliance on language priors to achieve better visual grounding in image-text matching. It can be seamlessly incorporated into existing vision-language models without additional training. Our experiments show that MASS effectively lessens language bias without sacrificing understanding of linguistic compositionality. Overall, MASS offers a promising solution for enhancing image-text matching performance in vision-language models.
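The abstract does not spell out MASS's scoring rule. A common, training-free way to discount language priors in matching scores is a pointwise-mutual-information-style adjustment: subtract the text-only log-probability (the language prior) from the image-conditioned log-probability. The sketch below illustrates that idea with toy log-probabilities; the function name, the weighting parameter `lam`, and the numbers are illustrative assumptions, not the authors' exact formulation.

```python
def pmi_style_score(cond_logprob: float, prior_logprob: float, lam: float = 1.0) -> float:
    """PMI-style association: log p(text | image) - lam * log p(text).

    Subtracting the language-only prior penalizes captions that are
    likely regardless of the image, which reduces language bias.
    (Illustrative sketch; not the exact MASS formulation.)
    """
    return cond_logprob - lam * prior_logprob

# Toy example: two candidate captions for the same image.
# Caption A: generic text -> high prior, modest conditional likelihood.
# Caption B: image-specific text -> low prior, similar conditional likelihood.
score_a = pmi_style_score(cond_logprob=-2.0, prior_logprob=-1.0)  # -1.0
score_b = pmi_style_score(cond_logprob=-2.5, prior_logprob=-4.0)  #  1.5
# The debiased score prefers the image-grounded caption B,
# even though caption A has the higher raw conditional likelihood.
```

Because the score is computed from log-probabilities that pretrained models already expose, an adjustment of this kind can be applied at inference time without any retraining, which matches the training-free property the abstract claims.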
Jan-20-2025