Whispering in Amharic: Fine-tuning Whisper for Low-resource Language

Dawit Ketema Gete, Bedru Yimam Ahamed, Tadesse Destaw Belay, Yohannes Ayana Ejigu, Sukairaj Hafiz Imam, Alemu Belay Tessema, Mohammed Oumer Adem, Tadesse Amare Belay, Robert Geislinger, Umma Aliyu Musa, Martin Semmann, Shamsuddeen Hassan Muhammad, Henning Schreiber, Seid Muhie Yimam

arXiv.org Artificial Intelligence 

This work explores fine-tuning OpenAI's Whisper automatic speech recognition (ASR) model for Amharic, a low-resource language, to improve transcription accuracy. While the foundational Whisper model struggles with Amharic due to its limited representation in the training data, we fine-tune it using datasets such as Mozilla Common Voice, FLEURS, and the BDU-speech dataset. The best-performing model, Whisper-small-am, improves significantly when fine-tuned on a mix of existing FLEURS data and new, previously unseen Amharic datasets. Training solely on the new data leads to poor performance, but combining it with FLEURS data reinforces the model and enables better specialization in Amharic. We also demonstrate that normalizing Amharic homophones markedly improves Word Error Rate (WER) and Bilingual Evaluation Understudy (BLEU) scores. This study underscores the importance of fine-tuning strategies and dataset composition for improving ASR in low-resource languages, providing insights for future Amharic speech recognition research.
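The abstract does not spell out the training setup, so the sketch below shows one plausible recipe using the Hugging Face transformers and datasets libraries: FLEURS Amharic mixed with Common Voice Amharic as a stand-in for the "new" data (the BDU-speech dataset is not assumed to be publicly loadable). The checkpoint, dataset identifiers (google/fleurs config am_et, mozilla-foundation/common_voice_17_0 config am, which is gated), and all hyperparameters are illustrative assumptions, not the paper's exact recipe.

```python
# Hypothetical fine-tuning sketch: Whisper-small on a FLEURS + Common Voice
# Amharic mix. Dataset IDs and hyperparameters are assumptions.
import torch
from datasets import Audio, concatenate_datasets, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="Amharic", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.generation_config.language = "amharic"
model.generation_config.task = "transcribe"

# Mix the familiar FLEURS material with unseen recordings, per the finding
# above that new data alone degrades performance. Resample both to 16 kHz
# before concatenating so the audio features match.
fleurs = (
    load_dataset("google/fleurs", "am_et", split="train")
    .rename_column("transcription", "sentence")
    .select_columns(["audio", "sentence"])
    .cast_column("audio", Audio(sampling_rate=16_000))
)
cv = (
    load_dataset("mozilla-foundation/common_voice_17_0", "am", split="train")
    .select_columns(["audio", "sentence"])
    .cast_column("audio", Audio(sampling_rate=16_000))
)
train = concatenate_datasets([fleurs, cv])

def prepare(batch):
    """Turn raw audio into log-mel features and text into label ids."""
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

train = train.map(prepare, remove_columns=train.column_names)

class SpeechCollator:
    """Pad audio features and label ids separately; mask label padding."""

    def __call__(self, features):
        inputs = [{"input_features": f["input_features"]} for f in features]
        batch = processor.feature_extractor.pad(inputs, return_tensors="pt")
        labels = processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"] == 0, -100
        )
        # Drop the leading <|startoftranscript|> token if the tokenizer added
        # it; the model re-inserts it when shifting labels right.
        if (batch["labels"][:, 0] == model.config.decoder_start_token_id).all():
            batch["labels"] = batch["labels"][:, 1:]
        return batch

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="whisper-small-am",
        per_device_train_batch_size=16,
        learning_rate=1e-5,
        warmup_steps=500,
        max_steps=4000,
        fp16=torch.cuda.is_available(),
        report_to="none",
    ),
    train_dataset=train,
    data_collator=SpeechCollator(),
    tokenizer=processor.feature_extractor,
)
trainer.train()
```

Concatenating the two corpora before preprocessing mirrors the finding above: the FLEURS material the model has already adapted to anchors training, while the unseen recordings extend its coverage of Amharic speech.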
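Homophone normalization exploits the fact that several Ethiopic character series are pronounced identically in modern Amharic (e.g., ሀ/ሐ/ኀ, ሰ/ሠ, አ/ዐ, ጸ/ፀ), so spelling variants should not be scored as recognition errors. The sketch below maps each variant series onto a single canonical series via fixed Unicode offsets; the choice of canonical series and the function name normalize_homophones are assumptions, since the abstract does not fix a convention.

```python
# Minimal sketch of Amharic homophone normalization before WER/BLEU scoring.
# The canonical forms chosen here (the ha-, se-, glottal-a-, and tse-series)
# are one common convention, not necessarily the paper's exact mapping.

# Homophonous Ethiopic series as (variant base, canonical base) code points;
# each series spans the same 7 vowel orders in the Unicode Ethiopic block.
_HOMOPHONE_SERIES = [
    (0x1210, 0x1200),  # ሐ ... ሖ -> ሀ ... ሆ
    (0x1280, 0x1200),  # ኀ ... ኆ -> ሀ ... ሆ
    (0x1220, 0x1230),  # ሠ ... ሦ -> ሰ ... ሶ
    (0x12D0, 0x12A0),  # ዐ ... ዖ -> አ ... ኦ
    (0x1340, 0x1338),  # ፀ ... ፆ -> ጸ ... ጾ
]

_TRANSLATION = {
    variant + order: canonical + order
    for variant, canonical in _HOMOPHONE_SERIES
    for order in range(7)
}

def normalize_homophones(text: str) -> str:
    """Map homophonous Ethiopic characters onto one canonical series."""
    return text.translate(_TRANSLATION)

if __name__ == "__main__":
    # "ፀሐይ" and "ጸሀይ" ("sun") are pronounced identically; after
    # normalization they no longer count as word errors against each other.
    print(normalize_homophones("ፀሐይ"))  # -> ጸሀይ
    print(normalize_homophones("ዓለም"))  # -> ኣለም
```

Applying the same normalization to both reference transcripts and model hypotheses before computing WER and BLEU removes penalties for orthographic variation that carries no phonetic difference.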
