MLIP: Efficient Multi-Perspective Language-Image Pretraining with Exhaustive Data Utilization
Yu Zhang, Qi Zhang, Zixuan Gong, Yiwei Shi, Yepeng Liu, Duoqian Miao, Yang Liu, Ke Liu, Kun Yi, Wei Fan, Liang Hu, Changwei Wang
– arXiv.org Artificial Intelligence
Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success, driving rapid advances in multimodal research. However, CLIP suffers from inefficient data utilization: it relies on a single contrastive supervision signal for each image-text pair during representation learning, discarding a substantial amount of valuable information that could provide richer supervision. In addition, retaining non-informative tokens increases computational cost and training time, particularly in CLIP's ViT image encoder. To address these issues, we propose Multi-Perspective Language-Image Pretraining (MLIP). MLIP exploits the frequency transform's sensitivity to both high- and low-frequency variations, which complements the spatial domain's sensitivity to low-frequency variations alone. By incorporating frequency transforms and token-level alignment, we expand CLIP's single supervision into multi-domain and multi-level supervision, enabling a more thorough exploration of informative image features. We further introduce a token merging method guided by comprehensive semantics from the frequency and spatial domains, which merges tokens into multi-granularity tokens at a controllable compression rate to accelerate CLIP. Extensive experiments validate the effectiveness of our design.
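The abstract does not give implementation details, but the two main ingredients it names, frequency-domain features for richer supervision and semantics-guided token merging with a controllable compression rate, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' method: the helper names (`frequency_features`, `merge_tokens`), the use of a plain 2D FFT, and the norm-based token score are all hypothetical stand-ins.

```python
# Minimal sketch (assumed, not MLIP's actual implementation):
# (1) map spatial patch tokens to frequency-domain features via a 2D FFT;
# (2) merge the least informative tokens at a controllable compression rate.
import torch


def frequency_features(patch_tokens: torch.Tensor, grid: int) -> torch.Tensor:
    """Turn spatial patch tokens (B, N, D) into frequency-domain tokens of the same shape."""
    b, n, d = patch_tokens.shape
    x = patch_tokens.reshape(b, grid, grid, d)        # restore the 2D patch grid
    freq = torch.fft.fft2(x, dim=(1, 2))              # 2D FFT over the spatial grid
    # Collapse the real/imaginary parts so the result is a real-valued tensor again.
    return torch.view_as_real(freq).mean(-1).reshape(b, n, d)


def merge_tokens(tokens: torch.Tensor, scores: torch.Tensor, compression: float) -> torch.Tensor:
    """Keep the highest-scoring tokens and average the rest into a single merged token."""
    b, n, d = tokens.shape
    keep = max(1, int(n * (1.0 - compression)))       # controllable compression rate
    idx = scores.argsort(dim=1, descending=True)      # rank tokens by informativeness
    keep_idx, drop_idx = idx[:, :keep], idx[:, keep:]
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    dropped = torch.gather(tokens, 1, drop_idx.unsqueeze(-1).expand(-1, -1, d))
    merged = dropped.mean(dim=1, keepdim=True)        # fold low-score tokens into one
    return torch.cat([kept, merged], dim=1)


# Toy usage: 196 patch tokens (14x14 grid), 50% compression.
tokens = torch.randn(2, 196, 512)
freq = frequency_features(tokens, grid=14)
scores = freq.norm(dim=-1) + tokens.norm(dim=-1)      # combine frequency + spatial cues
compact = merge_tokens(tokens, scores, compression=0.5)
print(compact.shape)  # torch.Size([2, 99, 512])
```

In this toy version, the compression rate directly sets how many tokens survive, which is the knob the abstract describes for trading accuracy against the ViT encoder's cost; the frequency and spatial norms stand in for the "comprehensive semantics" that guide the merge.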
Jun-4-2024