Gramian Multimodal Representation Learning and Alignment

Cicchetti, Giordano, Grassucci, Eleonora, Sigillo, Luigi, Comminiello, Danilo

arXiv.org Artificial Intelligence 

While recent multimodal models have achieved significant progress by aligning pairs of modalities via contrastive learning, their solutions are unsuitable when scaling to multiple modalities. These models typically align each modality to a designated anchor without ensuring the alignment of all modalities with each other, leading to suboptimal performance in tasks requiring a joint understanding of multiple modalities. In this paper, we structurally rethink the conventional pairwise approach to multimodal learning and present the novel Gramian Representation Alignment Measure (GRAM), which overcomes the above-mentioned limitations. GRAM learns and then aligns n modalities directly in the higher-dimensional space in which the modality embeddings lie, by minimizing the Gramian volume of the k-dimensional parallelotope spanned by the modality vectors, thus ensuring the geometric alignment of all modalities simultaneously. GRAM can replace cosine similarity in any downstream method, holds for 2 to n modalities, and provides a more meaningful alignment than previous similarity measures. The novel GRAM-based contrastive loss function enhances the alignment of multimodal models in the higher-dimensional embedding space, leading to new state-of-the-art performance in downstream tasks such as video-audio-text retrieval and audio-video classification. The project page, the code, and the pretrained models are available at https://ispamm.github.io/GRAM/.

Humans naturally process and integrate signals from multiple sensory modalities, such as sounds and visual inputs, to form a coherent understanding of the world around them. Inspired by this, foundational models have attempted to replicate this capability by aligning pairs of modalities, such as vision and language, through contrastive learning techniques. One of the most significant contributions in this domain was CLIP (Radford et al., 2021), which used a contrastive loss to align image and text representations based on cosine similarity. CLIP has shaped the current approach to multimodal learning, and every subsequent model relies on the same pairwise contrastive scheme, even in the case of models involving more than two modalities, such as ImageBind (Girdhar et al., 2023), VAST (Chen et al., 2023c), and LanguageBind (Zhu et al., 2024).
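To make the geometric intuition concrete, the sketch below (not the authors' released implementation; the function and variable names are illustrative, and unit-normalized embeddings are assumed) computes the volume of the parallelotope spanned by k modality vectors as the square root of the Gram determinant. Nearly aligned vectors give a volume close to zero, while nearly orthogonal ones give a volume close to one.

```python
import torch
import torch.nn.functional as F


def gram_volume(embeddings: torch.Tensor) -> torch.Tensor:
    """Volume of the parallelotope spanned by k modality embeddings.

    embeddings: (k, d) tensor with one L2-normalized row per modality.
    Returns sqrt(det(E E^T)); a smaller volume indicates better-aligned modalities.
    """
    gram = embeddings @ embeddings.T                      # (k, k) Gram matrix of inner products
    return torch.sqrt(torch.clamp(torch.det(gram), min=0.0))


# Toy example with three modalities (e.g., video, audio, text) in a 512-d space.
video = F.normalize(torch.randn(512), dim=0)
audio = F.normalize(torch.randn(512), dim=0)
text = F.normalize(torch.randn(512), dim=0)

vol = gram_volume(torch.stack([video, audio, text]))
print(vol)  # close to 1 for random (near-orthogonal) vectors, close to 0 when all three align
```

In a GRAM-style contrastive objective, such a volume would take the place of cosine similarity as the (negated) matching score, so that matched modality tuples are driven toward small volumes and mismatched tuples toward large ones.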