Understanding Multimodal Contrastive Learning Through Pointwise Mutual Information
Uesaka, Toshimitsu, Suzuki, Taiji, Takida, Yuhta, Lai, Chieh-Hsin, Murata, Naoki, Mitsufuji, Yuki
arXiv.org Artificial Intelligence
Multimodal representation learning, which integrates different modalities such as text, vision, and audio, is important for real-world applications. The symmetric InfoNCE loss proposed in CLIP is a key concept in multimodal representation learning. In this work, we provide a theoretical understanding of the symmetric InfoNCE loss through the lens of pointwise mutual information and show that encoders achieving the optimal similarity in pretraining provide good representations for downstream classification tasks under mild assumptions. Based on our theoretical results, we also propose a new similarity metric for multimodal contrastive learning that utilizes a nonlinear kernel to enrich the representational capability. To verify the effectiveness of the proposed method, we pretrain multimodal representation models on the Conceptual Captions datasets and evaluate zero-shot classification and linear classification on common benchmark datasets.

CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) established one of the most common frameworks for multimodal representation learning (Guo et al., 2019). In this framework, two encoders that map inputs from different modalities onto a shared space are trained with a contrastive loss (Chopra et al., 2005) to obtain a vision-language representation. Recent studies have shown that a CLIP model pretrained on a large-scale text-image dataset provides transferable features for various downstream tasks such as linear classification (Radford et al., 2021; Jia et al., 2021), text-to-video retrieval (Lin et al., 2022), text-conditioned image generation (Ramesh et al., 2022), and text-guided image manipulation (Patashnik et al., 2021). A CLIP model can also be used to feed visual information to large language models (Alayrac et al., 2022). Beyond the text and vision modalities, this multimodal contrastive learning framework can be applied to other combinations of modalities, such as text-audio representations (Elizalde et al., 2023).
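As a concrete reference point for the symmetric InfoNCE loss discussed above, the sketch below shows a minimal PyTorch implementation over a batch of paired image and text embeddings from the two encoders. The function name, cosine-similarity logits, and temperature value are illustrative assumptions for the standard CLIP-style setup, not details taken from this paper.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(image_emb: torch.Tensor,
                      text_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss for a batch of paired (image, text) embeddings.

    image_emb, text_emb: (batch, dim) outputs of the two encoders.
    Matched pairs lie on the diagonal of the similarity matrix; all other
    entries in the same row or column act as negatives.
    """
    # Normalize so the inner product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image-to-text and text-to-image), averaged.
    loss_i2t = F.cross_entropy(logits, labels)
    loss_t2i = F.cross_entropy(logits.t(), labels)
    return 0.5 * (loss_i2t + loss_t2i)
```

The paper's proposed similarity metric replaces the inner product with a nonlinear kernel; under the sketch above, that would amount to swapping the `image_emb @ text_emb.t()` term for a kernel evaluation between embedding pairs.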
Apr-29-2024