Compound Tokens: Channel Fusion for Vision-Language Representation Learning
Maxwell Mbabilla Aladago, AJ Piergiovanni
arXiv.org Artificial Intelligence
We present an effective method for fusing vision and language representations for several question answering tasks, including visual question answering and visual entailment. In contrast to prior work that concatenates unimodal representations or relies solely on cross-attention, we compose multimodal representations via channel fusion. Fusing along the channel dimension lets the model align tokens more effectively than standard methods do. These multimodal representations, which we call compound tokens, are generated with cross-attention transformer layers. First, vision tokens serve as queries to retrieve compatible text tokens through cross-attention; the vision tokens and the retrieved text tokens are then concatenated along the channel dimension to form compound tokens. A second group of compound tokens is generated by an analogous process in which the text tokens serve as the queries to the cross-attention layer. We demonstrate the effectiveness of compound tokens using an encoder-decoder vision-language model trained end-to-end in the open-vocabulary setting. Compound tokens achieve highly competitive performance across a range of question answering tasks, including GQA, VQA2.0, and SNLI-VE.
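The fusion described in the abstract can be sketched in a few lines of NumPy. This is a minimal, single-head illustration assuming no learned query/key/value projections, unit batch size, and toy token counts; the paper's actual model uses full transformer cross-attention layers, so every function and dimension here is a simplification for exposition only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # simplified single-head scaled dot-product cross-attention:
    # each query token retrieves a mixture of the other modality's tokens
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

def compound_tokens(vision_tokens, text_tokens):
    # vision tokens query the text tokens, then the two are
    # concatenated along the channel (feature) dimension
    retrieved_text = cross_attention(vision_tokens, text_tokens)
    vision_compound = np.concatenate([vision_tokens, retrieved_text], axis=-1)
    # analogous second group: text tokens serve as the queries
    retrieved_vision = cross_attention(text_tokens, vision_tokens)
    text_compound = np.concatenate([text_tokens, retrieved_vision], axis=-1)
    return vision_compound, text_compound

rng = np.random.default_rng(0)
v = rng.normal(size=(196, 64))  # e.g. 196 vision tokens, 64 channels
t = rng.normal(size=(32, 64))   # e.g. 32 text tokens, 64 channels
cv, ct = compound_tokens(v, t)
print(cv.shape, ct.shape)  # (196, 128) (32, 128)
```

Note that channel fusion doubles the feature dimension of every token while leaving the token counts unchanged, which is what distinguishes it from the usual sequence-length concatenation of unimodal tokens.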
December 2, 2022