Learning More with Less: Self-Supervised Approaches for Low-Resource Speech Emotion Recognition
Ziwei Gong, Pengyuan Shi, Kaan Donbekci, Lin Ai, Run Chen, David Sasu, Zehui Wu, Julia Hirschberg
arXiv.org Artificial Intelligence
Speech Emotion Recognition (SER) has seen significant progress with deep learning, yet it remains challenging for Low-Resource Languages (LRLs) due to the scarcity of annotated data. In this work, we explore self-supervised learning to improve SER in low-resource settings. Specifically, we investigate contrastive learning (CL) and Bootstrap Your Own Latent (BYOL) as self-supervised approaches to enhance cross-lingual generalization. Our methods achieve notable F1-score improvements of 10.6% in Urdu, 15.2% in German, and 13.9% in Bangla, demonstrating their effectiveness in LRLs. Additionally, we analyze model behavior to provide insights into key factors influencing performance across languages and to highlight challenges in low-resource SER. This work provides a foundation for developing more inclusive, explainable, and robust emotion recognition systems for underrepresented languages.
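The abstract names contrastive learning as one of the self-supervised objectives used to improve cross-lingual SER. As a minimal illustration of the idea (not the authors' implementation, whose architecture and hyperparameters are not given here), the sketch below computes an NT-Xent-style contrastive loss over two "views" of a batch of speech embeddings, pulling each sample's paired view together while pushing apart all other samples in the batch:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N utterances. Illustrative sketch only; the paper's actual
    encoder, augmentations, and loss settings are assumptions here.
    """
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize rows
    sim = z @ z.T / temperature                          # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    n = z1.shape[0]
    # Sample i's positive is its other view, stored at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Log-softmax over each row; take the log-probability of the positive.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

A matched pair of views yields a lower loss than an unrelated pair, which is the signal that lets the encoder learn emotion-relevant structure without emotion labels.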
Jun-4-2025