Causal Graphical Models for Vision-Language Compositional Understanding
Parascandolo, Fiorenzo, Moratelli, Nicholas, Sangineto, Enver, Baraldi, Lorenzo, Cucchiara, Rita
arXiv.org Artificial Intelligence
Recent work has empirically shown that Vision-Language Models (VLMs) struggle to fully understand the compositional properties of human language, usually modeling an image caption as a "bag of words". As a result, they perform poorly on compositional tasks, which require a deeper understanding of the different entities of a sentence (subject, verb, etc.) together with their mutual relationships. In this paper, we model the dependency relations among textual and visual tokens using a Causal Graphical Model (CGM), built with a dependency parser, and we train a decoder conditioned on the VLM's visual encoder. Unlike standard autoregressive or parallel prediction, our decoder's generative process is partially ordered, following the CGM structure; this encourages the decoder to learn only the main causal dependencies in a sentence while discarding spurious correlations. In extensive experiments on five compositional benchmarks, we show that our method outperforms all state-of-the-art compositional approaches by a large margin, and it also improves over methods trained on much larger datasets.
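The partially-ordered generation described above can be illustrated with a minimal sketch: given head-to-dependent edges from a dependency parse, tokens are grouped into levels so that each token is predicted only after its parents in the graph. The edge list and function below are hand-written for illustration (a real system would obtain edges from a dependency parser); they are assumptions, not the authors' implementation.

```python
# Hedged sketch: derive a partial generation order from dependency edges.
# The edges are hand-specified for the sentence "the dog chases a cat";
# in practice they would come from a dependency parser.
from collections import defaultdict

def generation_levels(tokens, edges):
    """Group tokens into levels: a token appears in a level only after
    all of its heads (causal parents) have appeared in earlier levels."""
    children = defaultdict(list)   # head -> list of dependents
    indeg = {t: 0 for t in tokens}  # number of unresolved parents
    for head, dep in edges:
        children[head].append(dep)
        indeg[dep] += 1
    level = [t for t in tokens if indeg[t] == 0]  # roots first
    levels = []
    while level:
        levels.append(level)
        nxt = []
        for t in level:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = nxt
    return levels

tokens = ["dog", "chases", "cat", "the", "a"]
edges = [("chases", "dog"), ("chases", "cat"),
         ("dog", "the"), ("cat", "a")]  # (head, dependent) pairs
print(generation_levels(tokens, edges))
# → [['chases'], ['dog', 'cat'], ['the', 'a']]
```

All tokens inside one level have no mutual dependencies and could be predicted in parallel, while the level order enforces the causal structure of the parse.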
Dec-12-2024