REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
Ziniu Hu, Ahmet Iscen, Chen Sun, Zirui Wang, Kai-Wei Chang, Yizhou Sun, Cordelia Schmid, David A. Ross, Alireza Fathi
arXiv.org Artificial Intelligence
In this paper, we propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes various sources of multimodal world knowledge (e.g., image-text pairs, question-answering pairs, and knowledge graph triplets) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty of our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can draw on a diverse set of multimodal knowledge sources, which we show yields significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
Apr-3-2023
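To make the retrieve-and-fuse flow in the abstract concrete, here is a minimal Python sketch of the pipeline: a unified encoder maps heterogeneous knowledge entries into one shared embedding space, the retriever scores memory entries against the encoded query, and the generator consumes the query plus the retrieved entries. All names, dimensions, and the hash-based stub encoder are illustrative assumptions, not REVEAL's actual implementation, in which all four components are large neural modules pre-trained end-to-end.

```python
# Minimal sketch of a retrieve-and-fuse pipeline like the one the abstract
# describes. Names, dimensions, and the stub encoder are assumptions for
# illustration only.
import hashlib
import numpy as np

DIM = 64  # embedding width (assumed)

def unified_encoder(entry: str) -> np.ndarray:
    """Stand-in for the unified encoder mapping any knowledge entry
    (image-text pair, QA pair, KG triplet, ...) into one embedding space.
    A hash-seeded random projection: deterministic, but it carries no real
    semantics -- it only illustrates the dataflow."""
    seed = int(hashlib.md5(entry.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)

# Memory: heterogeneous knowledge sources, encoded once, offline.
memory_entries = [
    "image-text: photo of the Eiffel Tower | caption: Eiffel Tower, Paris",
    "qa: Q: Who painted the Mona Lisa? A: Leonardo da Vinci",
    "kg: (Eiffel Tower, located_in, Paris)",
]
memory_keys = np.stack([unified_encoder(e) for e in memory_entries])

def retrieve(query: str, k: int = 2):
    """Retriever: rank memory entries by similarity to the encoded query."""
    q = unified_encoder(query)
    scores = memory_keys @ q  # dot product == cosine; all vectors unit-norm
    top = np.argsort(-scores)[:k]
    return [(memory_entries[i], float(scores[i])) for i in top]

def generate(query: str, retrieved) -> str:
    """Generator (stubbed): fuse the retrieved entries with the input query
    to produce the output. REVEAL uses a learned fusion + decoder here."""
    context = " | ".join(entry for entry, _ in retrieved)
    return f"answer to {query!r} given [{context}]"

query = "What city is the Eiffel Tower in?"
print(generate(query, retrieve(query)))
```

In the paper, retrieval scores also feed the training signal, so the retriever, encoder, and generator improve jointly; the stub above fixes the encoder only to keep the example self-contained and runnable.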