Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition
Ruoyu Zhao, Xiantao Jiang, F. Richard Yu, Victor C. M. Leung, Tao Wang, Shaohu Zhang
– arXiv.org Artificial Intelligence
Speech Emotion Recognition (SER) plays a crucial role in enhancing human-computer interaction. Cross-Linguistic SER (CLSER) has been a challenging research problem due to significant variability in the linguistic and acoustic features of different languages. In this study, we propose a novel approach, HuMP-CAT, which combines HuBERT, MFCC, and prosodic features. These features are fused using a cross-attention transformer (CAT) mechanism during feature extraction. Transfer learning is applied to transfer knowledge from a source emotional speech dataset to the target corpus for emotion recognition. We use IEMOCAP as the source dataset to train the source model and evaluate the proposed method on seven datasets in five languages (English, German, Spanish, Italian, and Chinese). We show that, by fine-tuning the source model with a small portion of speech from the target datasets, HuMP-CAT achieves an average accuracy of 78.75% across the seven datasets, with notable performance of 88.69% on EMODB (German) and 79.48% on EMOVO (Italian). Our extensive evaluation demonstrates that HuMP-CAT outperforms existing methods across multiple target languages.
Jan-6-2025
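A minimal sketch of the cross-attention fusion idea described in the abstract, assuming a PyTorch setup. The module name, feature dimensions (768 for HuBERT-base hidden states, 39 for MFCC with deltas, 4 prosodic statistics), the two-branch concatenation, and the mean pooling are illustrative assumptions, not the authors' exact HuMP-CAT architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Sketch: fuse a HuBERT stream with MFCC and prosodic streams via cross-attention."""
    def __init__(self, d_model=256, n_heads=4, n_emotions=4):
        super().__init__()
        # Project each feature stream to a shared dimension (sizes are assumptions).
        self.proj_hubert = nn.Linear(768, d_model)
        self.proj_mfcc = nn.Linear(39, d_model)
        self.proj_prosody = nn.Linear(4, d_model)
        # Cross-attention: HuBERT frames act as queries over the acoustic streams.
        self.cat_mfcc = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cat_prosody = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_emotions)

    def forward(self, hubert, mfcc, prosody):
        # hubert: (B, T1, 768), mfcc: (B, T2, 39), prosody: (B, T3, 4)
        q = self.proj_hubert(hubert)
        k1 = self.proj_mfcc(mfcc)
        k2 = self.proj_prosody(prosody)
        fused_mfcc, _ = self.cat_mfcc(q, k1, k1)     # HuBERT attends to MFCC
        fused_pros, _ = self.cat_prosody(q, k2, k2)  # HuBERT attends to prosody
        fused = torch.cat([fused_mfcc, fused_pros], dim=-1).mean(dim=1)  # pool over time
        return self.classifier(fused)

# Usage sketch: a small batch of target-language features, as in fine-tuning.
model = CrossAttentionFusion()
logits = model(torch.randn(2, 100, 768), torch.randn(2, 120, 39), torch.randn(2, 120, 4))
print(logits.shape)  # torch.Size([2, 4])
```

In a transfer-learning setting like the one the abstract describes, a model of this shape would first be trained on the source corpus (IEMOCAP) and then fine-tuned with a small portion of the target-language data, typically with a reduced learning rate.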
- Genre:
  - Research Report > New Finding (0.48)
- Industry:
  - Health & Medicine (0.93)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Cognitive Science > Emotion (0.96)
      - Machine Learning
        - Learning Graphical Models (0.93)
        - Neural Networks > Deep Learning (1.00)
        - Statistical Learning (1.00)
      - Natural Language (0.93)
      - Representation & Reasoning (1.00)
      - Speech (1.00)
    - Data Science > Data Mining (1.00)