Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
María Andrea Cruz Blandón, Zakaria Aldeneh, Jie Chi, Maureen de Seyssel
–arXiv.org Artificial Intelligence
ABSTRACT Self-supervised learning (SSL) has made significant advances in speech representation learning. Models like wav2vec 2.0 and HuBERT have achieved state-of-the-art results in tasks such as speech recognition, particularly in monolingual settings. However, multilingual SSL models tend to underperform their monolingual counterparts on each individual language, especially in settings with only a few languages, such as the bilingual case. In this work, we investigate a novel approach to reducing this performance gap by introducing limited visual grounding into bilingual speech SSL models. Our results show that visual grounding benefits both monolingual and bilingual models, with especially pronounced gains for the latter, reducing the multilingual performance gap on zero-shot phonetic discrimination from 31.5% for audio-only models to 8.04% with grounding.
Sep-23-2025