
Collaborating Authors

 Mistretta, Marco


Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion

arXiv.org Artificial Intelligence

Pre-trained multi-modal Vision-Language Models like CLIP are widely used off-the-shelf for a variety of applications. In this paper, we show that the common practice of individually exploiting the text or image encoders of these powerful multi-modal models is highly suboptimal for intra-modal tasks like image-to-image retrieval. We argue that this is inherently due to the CLIP-style inter-modal contrastive loss, which does not enforce any intra-modal constraints, leading to what we call intra-modal misalignment. To demonstrate this, we leverage two optimization-based modality inversion techniques that map representations from their input modality to the complementary one without any need for auxiliary data or additional trained adapters. We empirically show that, in the intra-modal tasks of image-to-image and text-to-text retrieval, approaching these tasks inter-modally significantly improves performance with respect to intra-modal baselines on more than fifteen datasets. Additionally, we demonstrate that approaching a native inter-modal task (e.g. zero-shot image classification) intra-modally decreases performance. Finally, we show that incorporating an intra-modal term in the pre-training objective or narrowing the modality gap between the text and image feature embedding spaces helps reduce the intra-modal misalignment.

In recent years the availability of massive, pre-trained Vision-Language Models (VLMs) has enabled a wide variety of applications, ranging from zero-shot image segmentation (Zhou et al., 2022a; Lüddecke & Ecker, 2022) to visual question answering (Song et al., 2022; Parelli et al., 2023). These models are typically composed of independent image and text encoders that are simultaneously trained on massive corpora of image-text pairs to align the text and image embeddings of associated inputs. For example, the Contrastive Language-Image Pre-training (CLIP) model is trained on a corpus of 400M image-text pairs to map inputs from both modalities into a shared embedding space (Radford et al., 2021). CLIP is trained with an inter-modal contrastive loss that maximizes the similarity of corresponding image-text pairs while minimizing the similarity with all other examples within a batch. Despite CLIP's shared embedding space, visual and textual features lie in distinct regions. This phenomenon, known as the modality gap (Liang et al., 2022), originates from model initialization and is preserved, and even worsened, by the inter-modal contrastive loss during training. Moreover, CLIP's contrastive training strategy focuses on inter-modal (i.e. image-to-text) alignment and places no constraints on intra-modal similarities. Consequently, the intra-image and intra-text similarities between CLIP representations might not faithfully correspond to those of the actual images or texts, as depicted in the left section of Figure 1. We refer to this issue as intra-modal misalignment. A simple experiment aimed at quantifying this problem is presented in Appendix B.
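
To make the distinction concrete, the sketch below implements a standard CLIP-style symmetric inter-modal contrastive loss, in which only image-text similarities ever appear, alongside a hypothetical intra-modal regularizer of the kind the abstract alludes to when it mentions adding an intra-modal term to the pre-training objective. The `intra_modal_term` function, its MSE form, and the 0.1 weight are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def clip_inter_modal_loss(image_feats, text_feats, temperature=0.07):
    """Standard CLIP-style symmetric contrastive loss.

    Only image-to-text and text-to-image similarities are supervised;
    image-to-image and text-to-text similarities never enter the loss.
    """
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature          # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def intra_modal_term(image_feats, text_feats):
    """Hypothetical intra-modal regularizer (illustration only): encourage
    image-image similarities to mirror text-text similarities so that
    relations within each modality are also constrained."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sim_ii = image_feats @ image_feats.t()
    sim_tt = text_feats @ text_feats.t()
    return F.mse_loss(sim_ii, sim_tt)

# Example usage with random features standing in for encoder outputs.
B, D = 8, 512
img, txt = torch.randn(B, D), torch.randn(B, D)
loss = clip_inter_modal_loss(img, txt) + 0.1 * intra_modal_term(img, txt)
```

Because the inter-modal loss never touches the image-image or text-text similarity matrices, nothing in plain CLIP pre-training prevents intra-modal similarities from drifting away from the true relations between images or between texts.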


RE-tune: Incremental Fine Tuning of Biomedical Vision-Language Models for Multi-label Chest X-ray Classification

arXiv.org Artificial Intelligence

In this paper we introduce RE-tune, a novel approach for fine-tuning pre-trained Multimodal Biomedical Vision-Language Models (VLMs) in Incremental Learning scenarios for multi-label chest disease diagnosis. RE-tune freezes the backbones and only trains simple adaptors on top of the Image and Text encoders of the VLM. By engineering positive and negative text prompts for diseases, we leverage the ability of Large Language Models to steer the training trajectory. We evaluate RE-tune in three realistic incremental learning scenarios: class-incremental, label-incremental, and data-incremental. Our results demonstrate that Biomedical VLMs are natural continual learners and prevent catastrophic forgetting. RE-tune not only achieves accurate multi-label classification results, but also prioritizes patient privacy, and its exceptional computational efficiency renders it highly suitable for broad adoption in real-world healthcare settings.
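
As a rough illustration of the adaptor-on-frozen-encoder idea and the positive/negative prompt scheme described above, here is a minimal sketch assuming CLIP-like normalized features. The residual linear `Adaptor`, the prompt pairing via a two-way softmax, and the temperature value are assumptions made for illustration and do not reproduce RE-tune's exact architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adaptor(nn.Module):
    """Simple residual linear adaptor on top of a frozen encoder
    (the exact adaptor architecture is an assumption here)."""
    def __init__(self, dim=512):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return F.normalize(x + self.proj(x), dim=-1)

def multilabel_probs(image_feat, pos_text_feats, neg_text_feats,
                     img_adaptor, txt_adaptor, temperature=0.07):
    """For each disease, compare the adapted image feature against an adapted
    positive prompt (e.g. "signs of <disease>") and negative prompt
    (e.g. "no signs of <disease>") and return per-disease probabilities."""
    img = img_adaptor(image_feat)                      # (D,)
    pos = txt_adaptor(pos_text_feats)                  # (num_diseases, D)
    neg = txt_adaptor(neg_text_feats)                  # (num_diseases, D)
    pos_sim = pos @ img / temperature
    neg_sim = neg @ img / temperature
    # Softmax over each positive/negative pair -> probability the disease is present.
    return torch.softmax(torch.stack([pos_sim, neg_sim], dim=-1), dim=-1)[..., 0]

# Example with random features standing in for frozen encoder outputs.
D, num_diseases = 512, 5
probs = multilabel_probs(torch.randn(D), torch.randn(num_diseases, D),
                         torch.randn(num_diseases, D), Adaptor(D), Adaptor(D))
```

Only the two adaptors carry trainable parameters, which is what makes this kind of setup lightweight enough for incremental fine-tuning while the VLM backbones stay frozen.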


Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation

arXiv.org Artificial Intelligence

Vision-Language Models (VLMs) demonstrate remarkable zero-shot generalization to unseen tasks, but fall short of the performance of supervised methods in generalizing to downstream tasks with limited data. Prompt learning is emerging as a parameter-efficient method for adapting VLMs, but state-of-the-art approaches require annotated samples. In this paper we propose a novel approach to prompt learning based on unsupervised knowledge distillation from more powerful models. Our approach, which we call Knowledge Distillation Prompt Learning (KDPL), can be integrated into existing prompt learning techniques and eliminates the need for labeled examples during adaptation. Our experiments on more than ten standard benchmark datasets demonstrate that KDPL is very effective at improving the generalization of learned prompts for zero-shot domain generalization, zero-shot cross-dataset generalization, and zero-shot base-to-novel class generalization problems. KDPL requires no ground-truth labels for adaptation, and we show that it can effectively transfer knowledge even in the absence of any knowledge of the training class names. The code is publicly available at https://github.com/miccunifi/KDPL.
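
A minimal sketch of an unsupervised distillation objective of the kind KDPL builds on: the student's class distribution, computed from image-text similarities with learnable prompts, is aligned to a stronger teacher's zero-shot distribution on unlabeled images. The KL form, the temperature, and the use of random tensors in place of real encoder outputs are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def kd_prompt_loss(student_logits, teacher_logits, temperature=2.0):
    """Unsupervised distillation loss: match the student's class distribution
    to the teacher's soft zero-shot predictions (no ground-truth labels)."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

# Toy usage: in a real setup, student_logits would come from image-text
# similarities of a CLIP model with learnable prompt tokens (the only trained
# parameters), and teacher_logits from a larger frozen VLM on the same
# unlabeled images.
B, num_classes = 16, 100
student_logits = torch.randn(B, num_classes, requires_grad=True)
teacher_logits = torch.randn(B, num_classes)
loss = kd_prompt_loss(student_logits, teacher_logits)
loss.backward()  # in a real setup, gradients reach only the learnable prompts
```

Since the objective depends only on the teacher's soft predictions, no ground-truth labels are needed during adaptation, which matches the label-free setting described in the abstract.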