Expand BERT Representation with Visual Information via Grounded Language Learning with Multimodal Partial Alignment
Cong-Duy Nguyen, The-Anh Vu-Le, Thong Nguyen, Tho Quan, Anh Tuan Luu
arXiv.org Artificial Intelligence
In existing studies of visually grounded language learning, language models have been supervised with both language-only objectives and visual grounding. However, because visually grounded datasets and language corpora differ in distribution and scale, the language model tends to mix up the context of tokens that occur in the grounded data with those that do not. As a result, during representation learning, there is a mismatch between the visual information and the contextual meaning of the sentence. To overcome this limitation, we propose GroundedBERT, a grounded language learning method that enhances the BERT representation with visually grounded information. GroundedBERT comprises two components: (i) the original BERT, which captures the contextual representation of words learned from language corpora, and (ii) a visual grounding module, which captures visual information learned from visually grounded datasets. Moreover, we employ Optimal Transport (OT), specifically its partial variant, to solve the fractional alignment problem between the two modalities. Our proposed method significantly outperforms the baseline language models on various language tasks of the GLUE and SQuAD datasets.
Jan-9-2024
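The abstract describes only the high-level design, so the following is a minimal PyTorch sketch of that idea under stated assumptions, not the paper's implementation: the `GroundingModule` class, its dimensions, the cosine cost, and the `mass` fraction are all illustrative, and the partial OT step is approximated here with the standard dummy-point reduction to balanced entropic Sinkhorn rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def log_sinkhorn(C, a, b, eps=0.1, n_iters=200):
    """Balanced entropic OT via Sinkhorn iterations in the log domain."""
    log_a, log_b = a.log(), b.log()
    f = torch.zeros_like(a)
    g = torch.zeros_like(b)
    for _ in range(n_iters):
        f = eps * (log_a - torch.logsumexp((g[None, :] - C) / eps, dim=1))
        g = eps * (log_b - torch.logsumexp((f[:, None] - C) / eps, dim=0))
    return torch.exp((f[:, None] + g[None, :] - C) / eps)  # transport plan


def partial_ot_plan(C, mass=0.7, eps=0.1):
    """Partial OT via the dummy-point reduction: append a zero-cost dummy
    row/column that absorbs the untransported mass, then run balanced
    Sinkhorn. `mass` in (0, 1) is the fraction of mass to transport;
    the value 0.7 is an illustrative assumption."""
    n, m = C.shape
    a = torch.full((n,), 1.0 / n, device=C.device)
    b = torch.full((m,), 1.0 / m, device=C.device)
    s = mass * torch.min(a.sum(), b.sum())
    big = C.max().detach() + 1.0                # block dummy-to-dummy transport
    C_aug = F.pad(C, (0, 1, 0, 1), value=0.0)   # zero-cost dummy row/column
    C_aug[-1, -1] = 2.0 * big
    a_aug = torch.cat([a, (b.sum() - s).reshape(1)])
    b_aug = torch.cat([b, (a.sum() - s).reshape(1)])
    P = log_sinkhorn(C_aug, a_aug, b_aug, eps=eps)
    return P[:n, :m]                            # drop the dummy row/column


class GroundingModule(nn.Module):
    """Hypothetical visual grounding head on top of BERT token outputs."""

    def __init__(self, hidden=768, vis_dim=2048, ground_dim=256):
        super().__init__()
        self.txt_proj = nn.Linear(hidden, ground_dim)
        self.vis_proj = nn.Linear(vis_dim, ground_dim)

    def forward(self, bert_out, region_feats):
        # bert_out: (T, hidden) contextual token embeddings from BERT
        # region_feats: (R, vis_dim) image-region features, e.g. from a detector
        t = F.normalize(self.txt_proj(bert_out), dim=-1)
        v = F.normalize(self.vis_proj(region_feats), dim=-1)
        C = 1.0 - t @ v.T                       # cosine-distance cost (T, R)
        P = partial_ot_plan(C)                  # soft partial token-region coupling
        align_loss = (P * C).sum()              # partial-OT alignment objective
        # Visually grounded token component: OT-weighted mixture of regions.
        grounded = (P @ v) / P.sum(dim=1, keepdim=True).clamp_min(1e-9)
        # Final representation: contextual BERT features concatenated with
        # the grounded component, mirroring the two-component design.
        return torch.cat([bert_out, grounded], dim=-1), align_loss


# Usage with dummy features standing in for BERT and detector outputs.
bert_out = torch.randn(12, 768)    # 12 contextual token vectors from BERT
regions = torch.randn(5, 2048)     # 5 region features from an object detector
rep, align_loss = GroundingModule()(bert_out, regions)
print(rep.shape, float(align_loss))  # torch.Size([12, 1024]) and a scalar loss
```

The dummy-point trick keeps the solver simple: only a `mass` fraction of the token and region mass must be matched, so tokens with no visual counterpart can route their mass to the dummy column instead of being forced onto an unrelated region, which is the fractional-alignment behavior the abstract motivates.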