Multimodal Reranking for Knowledge-Intensive Visual Question Answering
Haoyang Wen, Honglei Zhuang, Hamed Zamani, Alexander Hauptmann, Michael Bendersky
arXiv.org Artificial Intelligence
Knowledge-intensive visual question answering requires models to effectively use external knowledge to answer visual questions. A typical pipeline consists of a knowledge retriever and an answer generator. However, a retriever that relies on local information, such as an image patch, may not produce reliable question-candidate relevance scores. Moreover, the two-tower retriever architecture, which encodes the question and each candidate independently, limits relevance score modeling when selecting top candidates for the answer generator to reason over. In this paper, we introduce an additional module, a multi-modal reranker, to improve the ranking quality of knowledge candidates for answer generation. Our reranking module takes multi-modal information from both candidates and questions and performs cross-item interaction for better relevance score modeling. Experiments on OK-VQA and A-OKVQA show that a multi-modal reranker trained with distant supervision provides consistent improvements. We also find a training-testing discrepancy when reranking is used for answer generation: performance improves when the knowledge candidates used in training are similar to, or noisier than, those used at test time.
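The abstract contrasts a two-tower retriever, which scores question and candidate independently, with a reranker that performs cross-item interaction. Below is a minimal PyTorch sketch of that cross-encoder idea, not the authors' released implementation: question tokens, projected image features, and candidate tokens are concatenated into one sequence so self-attention can model question-candidate interaction directly. All class names, dimensions, and feature shapes here are illustrative assumptions.

```python
# Minimal sketch of a cross-encoder multimodal reranker (illustrative only).
import torch
import torch.nn as nn

class MultimodalReranker(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Assumed 2048-dim image patch features (e.g. pooled CNN/ViT outputs).
        self.image_proj = nn.Linear(2048, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, question_ids, image_feats, candidate_ids):
        # Joint sequence of question, image, and candidate: self-attention
        # across all three is the cross-item interaction a two-tower
        # retriever cannot perform.
        q = self.text_embed(question_ids)    # (B, Lq, D)
        v = self.image_proj(image_feats)     # (B, Lv, D)
        c = self.text_embed(candidate_ids)   # (B, Lc, D)
        h = self.encoder(torch.cat([q, v, c], dim=1))
        # Pool the first position (a CLS-like choice, assumed here)
        # into one relevance score per question-candidate pair.
        return self.score_head(h[:, 0]).squeeze(-1)

# Usage: score K retrieved candidates for one question, then keep the
# top-scoring candidates for the answer generator.
model = MultimodalReranker()
q = torch.randint(0, 30522, (3, 16))   # question token IDs, tiled per candidate
img = torch.randn(3, 36, 2048)         # 36 image patch features per question
cand = torch.randint(0, 30522, (3, 64))  # 3 candidate passages
print(model(q, img, cand))             # tensor of 3 relevance scores
```

In the full pipeline the reranker would rescore the retriever's candidate list, and only the top-ranked passages would be passed to the answer generator.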
Jul-16-2024