DiffGAP: A Lightweight Diffusion Module in Contrastive Space for Bridging Cross-Modal Gap
Shentong Mo, Zehua Chen, Fan Bao, Jun Zhu
arXiv.org Artificial Intelligence
Recent work in cross-modal understanding and generation, notably through models like CLAP (Contrastive Language-Audio Pretraining) and CAVP (Contrastive Audio-Visual Pretraining), has significantly enhanced the alignment of text, video, and audio embeddings via a single contrastive loss. However, these methods often overlook the bidirectional interactions and inherent noise present in each modality, which can critically impact the quality and efficacy of cross-modal integration. To address this limitation, we introduce DiffGAP, a novel approach incorporating a lightweight generative module within the contrastive space. Specifically, DiffGAP employs a bidirectional diffusion process tailored to bridge the cross-modal gap more effectively: a denoising process on text and video embeddings conditioned on audio embeddings, and vice versa, facilitating a more nuanced and robust cross-modal interaction. Our experimental results on the VGGSound and AudioCaps datasets demonstrate that DiffGAP significantly improves performance in video/text-audio generation and retrieval tasks, confirming its effectiveness in enhancing cross-modal understanding and generation capabilities.
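The bidirectional denoising described above can be illustrated with a minimal NumPy sketch of one DDPM-style forward-noising and reverse step in a shared embedding space. This is not the authors' implementation: the embedding dimension, schedule value, and the stand-in for the conditional noise predictor are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (hypothetical)

def add_noise(x0, alpha_bar, rng):
    """Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

def denoise(x_t, cond, alpha_bar, eps_hat):
    """One reverse step: recover x0 from x_t given a noise estimate.
    In DiffGAP, `cond` (the other modality's embedding) would condition
    a learned noise predictor; here `eps_hat` stands in for that
    network's output."""
    return (x_t - np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)

audio = rng.standard_normal(DIM)  # audio embedding (toy stand-in)
text = rng.standard_normal(DIM)   # text embedding (toy stand-in)
alpha_bar = 0.9                   # cumulative noise schedule value (assumed)

# Denoise the text embedding conditioned on audio; the symmetric
# direction (audio conditioned on text/video) works the same way.
text_t, eps = add_noise(text, alpha_bar, rng)
text_rec = denoise(text_t, cond=audio, alpha_bar=alpha_bar, eps_hat=eps)
```

With the true noise supplied as the estimate, the reverse step recovers the clean embedding exactly; a trained conditional predictor would only approximate it.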
Mar-15-2025