A Practitioner's Guide to Building ASR Models for Low-Resource Languages: A Case Study on Scottish Gaelic
Ondřej Klejch, William Lamb, Peter Bell
arXiv.org Artificial Intelligence
An effective approach to the development of ASR systems for low-resource languages is to fine-tune an existing multilingual end-to-end model. When the original model has been trained on large quantities of data from many languages, fine-tuning can be effective with limited training data, even when the language in question was not present in the original training data. The fine-tuning approach has been encouraged by the availability of public-domain E2E models and is widely believed to lead to state-of-the-art results. This paper, however, challenges that belief. We show that an approach combining hybrid HMMs with self-supervised models can yield substantially better performance with limited training data. This combination allows better utilisation of all available speech and text data through continued self-supervised pre-training and semi-supervised training. We benchmark our approach on Scottish Gaelic, achieving WER reductions of 32% relative over our best fine-tuned Whisper model.
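The headline result is stated as a relative word error rate (WER) reduction. As a minimal sketch of how such a figure is computed, the snippet below uses hypothetical WER values (not numbers from the paper) that happen to yield a 32% relative reduction:

```python
# Hedged sketch: how a "relative WER reduction" figure is computed.
# The WER values below are hypothetical, not taken from the paper.

def relative_wer_reduction(baseline_wer: float, new_wer: float) -> float:
    """Return the reduction of new_wer relative to baseline_wer, in percent."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# Hypothetical example: a fine-tuned Whisper baseline at 25.0% WER and a
# hybrid HMM + self-supervised system at 17.0% WER.
baseline, improved = 25.0, 17.0
print(f"{relative_wer_reduction(baseline, improved):.0f}% relative")  # prints "32% relative"
```

Note that a relative reduction is measured against the baseline's WER, so it is larger than the corresponding absolute WER difference (8 points in this hypothetical example).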
Jun-6-2025