SupCLAP: Controlling Optimization Trajectory Drift in Audio-Text Contrastive Learning with Support Vector Regularization
Luo, Jiehui, Yin, Yuguo, Xie, Yuxin, Ru, Jinghan, Zhuang, Xianwei, He, Minghua, Liu, Aofan, Xiong, Zihan, Yang, Dongchao
arXiv.org Artificial Intelligence
Contrastive language-audio pretraining, which aims to unify multimodal representations in a shared embedding space, serves as a cornerstone for building a wide range of applications, from cross-modal retrieval to cutting-edge multimodal large language models. However, we find that the perpendicular component of the pushing force from negative samples in contrastive learning is a double-edged sword: it contains rich supplementary information from negative samples, yet its unconstrained nature causes optimization trajectory drift and training instability. To address this, we propose Support Vector Regularization (SVR), a method that introduces an auxiliary support vector to control this perpendicular component, aiming to harness its rich information while mitigating the associated trajectory drift. The efficacy of SVR is critically governed by its semantic radius, for which we explore two unsupervised modeling strategies: direct parameterization and an adaptive radius predictor module enhanced with constraints to improve its prediction accuracy. Extensive experimental results demonstrate that our method surpasses widely used baselines such as the InfoNCE and SigLIP losses across classification, monolingual retrieval, and multilingual retrieval on standard audio-text datasets.

Contrastive Language-Audio Pretraining (CLAP) Wu et al. (2023); Ghosh et al. (2025) aims to learn a unified audio-text embedding space by pulling corresponding pairs closer and pushing others apart. This paradigm, which powers applications like cross-modal retrieval Xie et al. (2024) and multimodal LLMs Xue et al. (2024); Lam et al. (2025), has achieved great empirical success. However, standard InfoNCE-based CLAP methods still struggle to learn ideal representations, facing limitations such as poor temporal alignment of audio events Yuan et al. (2024) and inconsistent multilingual alignment Yin et al. (2025).
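The pull/push objective that CLAP builds on is typically instantiated as a symmetric InfoNCE loss over a batch of paired audio and text embeddings. The following is a minimal illustrative sketch (plain NumPy, not the paper's implementation); the function name `info_nce` and the temperature value are assumptions for illustration.

```python
import numpy as np

def info_nce(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/text embeddings.
    Matched pairs sit on the diagonal of the similarity matrix."""
    # L2-normalize so dot products become cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(a))              # positives on the diagonal

    def xent(l):
        # numerically stable cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the audio->text and text->audio directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With matched pairs perfectly aligned, the diagonal dominates and the loss approaches zero; mismatched embeddings yield a loss near log B.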
Therefore, achieving optimal alignment between the language and audio representation spaces remains an open challenge. In this paper, we uncover a complex yet overlooked dynamic in the optimization process of standard InfoNCE-based contrastive learning Wu et al. (2021): optimization trajectory drift.
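The drift dynamic can be visualized geometrically: the repulsive direction contributed by a negative sample decomposes into a component parallel to the anchor-to-positive pulling direction and a perpendicular remainder, and it is the unconstrained perpendicular part that deflects the optimization trajectory. Below is an illustrative decomposition in plain NumPy, assuming Euclidean geometry; it is not the paper's exact SVR formulation, and the function name `decompose_push` is hypothetical.

```python
import numpy as np

def decompose_push(anchor, positive, negative):
    """Split the repulsive direction from one negative sample into components
    parallel and perpendicular to the anchor->positive pulling direction."""
    pull = positive - anchor
    pull_unit = pull / np.linalg.norm(pull)
    push = anchor - negative                    # direction away from the negative
    parallel = (push @ pull_unit) * pull_unit   # aligned with (or against) the pull
    perpendicular = push - parallel             # the drift-inducing component
    return parallel, perpendicular
```

The two components always sum back to the full push, and the perpendicular part is orthogonal to the pulling direction by construction, which is what makes it invisible to the pull objective yet still able to move the anchor off its trajectory.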
Sep-26-2025