Improving Self-supervised Pre-training using Accent-Specific Codebooks
Darshan Prabhu, Abhishek Gupta, Omkar Nitsure, Preethi Jyothi, Sriram Ganapathy
arXiv.org Artificial Intelligence
Speech accents present a serious challenge to the performance of state-of-the-art end-to-end Automatic Speech Recognition (ASR) systems. Even with self-supervised learning and pre-training of ASR models, accent invariance is seldom achieved. In this work, we propose an accent-aware adaptation technique for self-supervised learning that introduces a trainable set of accent-specific codebooks into the self-supervised architecture. These learnable codebooks enable the model to capture accent-specific information during pre-training, which is further refined during ASR fine-tuning. On the Mozilla Common Voice dataset, our proposed approach outperforms all other accent-adaptation approaches on both seen and unseen English accents, with up to 9% relative reduction in word error rate (WER).
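The core idea of the abstract, a per-accent set of learnable code vectors whose information is mixed into the frame-level representations, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (the function name, cross-attention-style mixing, and residual addition are illustrative choices, not the paper's exact architecture), intended only to show how accent-specific codebooks could inject accent information into encoder features:

```python
import numpy as np

def attend_accent_codebook(frames, codebooks, accent_id):
    """Illustrative sketch (not the paper's exact method): mix accent
    information into frame features by softmax-attending over a
    per-accent codebook of learnable vectors, then adding residually."""
    C = codebooks[accent_id]                            # (num_codes, d)
    d = frames.shape[-1]
    scores = frames @ C.T / np.sqrt(d)                  # (T, num_codes)
    # numerically stable softmax over codebook entries
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    accent_info = w @ C                                 # (T, d)
    return frames + accent_info                         # residual add

# toy usage: two hypothetical accent codebooks, one short utterance
rng = np.random.default_rng(0)
d, T, num_codes = 16, 5, 8
codebooks = {"en-IN": rng.normal(size=(num_codes, d)),
             "en-US": rng.normal(size=(num_codes, d))}
frames = rng.normal(size=(T, d))
out = attend_accent_codebook(frames, codebooks, "en-IN")
```

During pre-training the codebook entries would be trained jointly with the encoder, so each accent's codes come to encode accent-specific cues that the shared encoder alone does not capture.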
Jul-4-2024