Selective Attention Merging for low resource tasks: A case study of Child ASR
Natarajan Balaji Shankar, Zilai Wang, Eray Eren, Abeer Alwan
–arXiv.org Artificial Intelligence
While Speech Foundation Models (SFMs) excel at a variety of speech tasks, their performance on low-resource tasks such as child Automatic Speech Recognition (ASR) is hampered by limited pretraining data. To address this, we explore model merging techniques that leverage knowledge from models trained on larger, more diverse speech corpora. This paper also introduces Selective Attention (SA) Merge, a novel method that selectively merges task vectors from attention matrices to enhance SFM performance on low-resource tasks. Experiments on the MyST database show relative word error rate (WER) reductions of up to 14%, outperforming existing model merging and data augmentation techniques. By combining data augmentation with SA Merge, we achieve a new state-of-the-art WER of 8.69 on the MyST database for the Whisper-small model, highlighting the potential of SA Merge for improving low-resource ASR.
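The abstract describes merging task vectors only for attention matrices. A minimal sketch of that idea follows, assuming a task vector is the difference between a donor model's weights and the base model's weights, applied with a scalar coefficient; the function name `sa_merge`, the coefficient `alpha`, and the name-based attention filter are illustrative assumptions, not the paper's exact selection rule.

```python
import numpy as np

def sa_merge(base, donor, alpha=0.5, attn_key="attn"):
    """Attention-only task-vector merging (illustrative sketch).

    base:     dict of parameter name -> array for the base model
    donor:    dict for a model trained on a larger, more diverse corpus
    alpha:    hypothetical merge coefficient scaling the task vector
    attn_key: substring used to select attention parameters by name
    """
    merged = {}
    for name, w in base.items():
        if attn_key in name and name in donor:
            # task vector = donor minus base; applied only to
            # attention matrices, leaving other weights untouched
            merged[name] = w + alpha * (donor[name] - w)
        else:
            merged[name] = w.copy()
    return merged

# Toy usage: only the attention parameter moves toward the donor.
base = {"attn.q_proj": np.ones(4), "mlp.fc1": np.zeros(4)}
donor = {"attn.q_proj": np.full(4, 3.0), "mlp.fc1": np.full(4, 5.0)}
out = sa_merge(base, donor, alpha=0.5)
```

With `alpha=0.5`, the attention weights end up halfway between base and donor, while non-attention weights are copied from the base model unchanged.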
Jan-14-2025