Domain Adaptation Method and Modality Gap Impact in Audio-Text Models for Prototypical Sound Classification
Emiliano Acevedo, Martín Rocamora, Magdalena Fuentes
arXiv.org Artificial Intelligence
Audio-text models are widely used in zero-shot environmental sound classification, as they alleviate the need for annotated data. However, we show that their performance drops severely in the presence of background sound sources. Our analysis reveals that this degradation is driven primarily by the SNR of the background soundscape and is largely independent of background type. To address this, we propose a novel method that quantifies the contribution of background sources and integrates it into the classification process, improving performance without retraining the model. Our domain adaptation technique enhances accuracy across diverse backgrounds and SNR conditions. Moreover, we analyze the modality gap between audio and text embeddings, showing that narrowing this gap improves classification performance.
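The zero-shot setup the abstract refers to can be sketched as follows: each class label is embedded as text, the audio clip is embedded in the same space, and the predicted class is the label whose text embedding is closest to the audio embedding under cosine similarity. This is a minimal illustration only; the embeddings below are hypothetical toy vectors, not outputs of the authors' model or of any real audio-text encoder.

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(audio_emb, text_embs):
    # predict the label whose text embedding is most similar
    # to the audio embedding (standard zero-shot classification)
    return max(text_embs, key=lambda label: cosine(audio_emb, text_embs[label]))

# hypothetical toy embeddings; a real system would obtain these
# from a pretrained audio-text model
text_embs = {
    "dog bark": [0.9, 0.1, 0.0],
    "siren":    [0.0, 0.8, 0.6],
}
audio_emb = [0.85, 0.2, 0.05]

print(zero_shot_classify(audio_emb, text_embs))  # -> dog bark
```

The paper's contribution operates on top of this pipeline: it adjusts how the comparison is made under noisy backgrounds rather than retraining the encoders.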
Jun-6-2025