BaldWhisper: Faster Whisper with Head Shearing and Layer Merging
Yaya Sy, Christophe Cerisara, Irina Illina
Pruning large pre-trained transformers for low-resource languages is challenging, as it often requires massive retraining data to recover performance. For instance, Distil-Whisper prunes Whisper by 40% and retrains on 21,000 hours of speech, far beyond what is available for most languages. Can Whisper be made lighter and faster for edge devices in data-scarce settings? Focusing on Bambara with only 32 hours of speech-to-text data, we propose a new pruning recipe. Instead of vocabulary pruning, which is unsuitable due to frequent code-switching by Bambara speakers, we compress the embeddings with low-rank decomposition and feature distillation. Rather than removing layers, we merge them to limit performance loss. The final model preserves 90% of the original performance while being 48% smaller and 2.15x faster on a MacBook Air M1.
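The low-rank embedding compression mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' exact recipe: it factors a Whisper-sized embedding table with a truncated SVD into two smaller matrices whose product approximates the original. The `low_rank_factorize` helper and the `rank` value are hypothetical choices for illustration, and the feature-distillation fine-tuning step the paper applies afterwards is omitted.

```python
import torch

# Illustrative sketch of low-rank embedding compression (not the paper's exact method).
# The embedding matrix E (vocab_size x d_model) is factored with a truncated SVD into
# A (vocab_size x r) and B (r x d_model), so that A @ B approximates E with r << d_model.
# In the paper's pipeline, such factors would then be refined with feature distillation.

def low_rank_factorize(embedding: torch.Tensor, rank: int):
    """Return factors A, B such that A @ B approximates `embedding`."""
    U, S, Vh = torch.linalg.svd(embedding, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vh[:rank, :]
    return A, B

# Example with Whisper-like dimensions and a hypothetical rank choice.
vocab_size, d_model, rank = 51865, 1280, 256
E = torch.randn(vocab_size, d_model)  # stand-in for a trained embedding table
A, B = low_rank_factorize(E, rank)

params_before = vocab_size * d_model
params_after = vocab_size * rank + rank * d_model
rel_err = torch.linalg.norm(E - A @ B) / torch.linalg.norm(E)
print(f"params: {params_before} -> {params_after}, relative error: {rel_err:.3f}")
```

On a trained embedding table, whose singular values typically decay much faster than those of the random matrix used here, the reconstruction error at a given rank would be considerably lower; the parameter saving depends only on the chosen rank.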
arXiv.org Artificial Intelligence
Oct-13-2025
- Country:
- Africa
- Mali (0.05)
- The Gambia (0.04)
- Europe > France
- Grand Est > Meurthe-et-Moselle > Nancy (0.40)
- North America > United States
- Florida > Miami-Dade County
- Miami (0.04)
- Texas > Travis County
- Austin (0.04)
- Genre:
- Research Report (0.50)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Speech > Speech Recognition (0.49)