Exploiting Domain-Specific Parallel Data on Multilingual Language Models for Low-resource Language Translation
Ranathunga, Surangika; Nayak, Shravan; Huang, Shih-Ting Cindy; Mao, Yanke; Su, Tong; Chan, Yun-Hsiang Ray; Yuan, Songchen; Rinaldi, Anthony; Lee, Annie En-Shiun
Neural Machine Translation (NMT) systems built on multilingual sequence-to-sequence Language Models (msLMs) fail to deliver the expected results when both the amount of parallel data for a language and the language's representation in the model are limited. This restricts the capabilities of domain-specific NMT systems for low-resource languages (LRLs). As a solution, parallel data from auxiliary domains can be used either to fine-tune the msLM or to further pre-train it. We present an evaluation of the effectiveness of these two techniques in the context of domain-specific LRL-NMT. We also explore the impact of domain divergence on NMT model performance. We recommend several strategies for utilizing auxiliary parallel data in building domain-specific NMT models for LRLs.
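To make the fine-tuning route concrete, below is a minimal sketch of fine-tuning a multilingual seq2seq LM on parallel data with Hugging Face Transformers. This is not the authors' code: the choice of mBART-50 as the msLM, the English-to-Sinhala direction, and the toy in-domain corpus are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation) of
# fine-tuning a multilingual seq2seq LM on domain-specific parallel data.
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Assumption: mBART-50 as the msLM and English -> Sinhala as the LRL pair.
model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="en_XX", tgt_lang="si_LK"
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Hypothetical in-domain parallel corpus; replace with real sentence pairs.
pairs = {
    "src": ["The patient reported a high fever.", "Take one tablet daily."],
    "tgt": ["<Sinhala translation 1>", "<Sinhala translation 2>"],
}
raw = Dataset.from_dict(pairs)

def preprocess(batch):
    # Tokenize source text; text_target produces the decoder-side labels.
    return tokenizer(
        batch["src"], text_target=batch["tgt"], truncation=True, max_length=128
    )

train = raw.map(preprocess, batched=True, remove_columns=["src", "tgt"])

# Pads sources and labels dynamically per batch.
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

args = Seq2SeqTrainingArguments(
    output_dir="msLM-domain-ft",
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=collator,
    tokenizer=tokenizer,
)
trainer.train()
```

The alternative the abstract mentions, further pre-training, would instead continue the msLM's original self-supervised objective (e.g., denoising for mBART) on auxiliary-domain text before any such translation fine-tuning; which option pays off, per the paper, depends in part on how far the auxiliary domain diverges from the target domain.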
arXiv.org Artificial Intelligence
Dec-27-2024