SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier
In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the "curse of multilinguality", these models often scale up the number of parameters, which makes them challenging to use in resource-constrained environments. We introduce SMaLL-100, a distilled version of the M2M-100 (12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks (FLORES-101, Tatoeba, and TICO-19) and demonstrate that it outperforms previous massively multilingual models of comparable size (200-600M parameters) while improving inference latency and memory usage. Additionally, our model achieves results comparable to M2M-100 (1.2B) while being 3.6x smaller and 4.3x faster at inference. Code and pre-trained models: https://github.com/alirezamshi/small100
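
Since SMaLL-100 shares the M2M-100 architecture, it can in principle be used through the standard M2M-100 classes in HuggingFace Transformers. The sketch below shows the documented usage pattern for an M2M-100-family checkpoint, using the publicly available "facebook/m2m100_418M" checkpoint as a stand-in; the SMaLL-100 weights and the adapted tokenizer are distributed via the linked repository, and loading them may differ slightly from this minimal example.

# Minimal sketch of translating with an M2M-100-family checkpoint via HuggingFace Transformers.
# The checkpoint below is the public M2M-100 418M model; SMaLL-100 itself is obtained from the
# repository linked above and may require its adapted tokenizer.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"  # stand-in checkpoint for illustration
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Translate French -> English: set the source language, then force the target language token.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
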
arXiv.org Artificial Intelligence
Oct-20-2022