Exploiting Domain-Specific Parallel Data on Multilingual Language Models for Low-resource Language Translation
Ranathunga, Surangika, Nayak, Shravan, Huang, Shih-Ting Cindy, Mao, Yanke, Su, Tong, Chan, Yun-Hsiang Ray, Yuan, Songchen, Rinaldi, Anthony, Lee, Annie En-Shiun
arXiv.org Artificial Intelligence
Neural Machine Translation (NMT) systems built on multilingual sequence-to-sequence Language Models (msLMs) fail to deliver the expected results when the amount of parallel data for a language, as well as the language's representation in the model, is limited. This restricts the capabilities of domain-specific NMT systems for low-resource languages (LRLs). As a solution, parallel data from auxiliary domains can be used either to fine-tune or to further pre-train the msLM. We present an evaluation of the effectiveness of these two techniques in the context of domain-specific LRL-NMT. We also explore the impact of domain divergence on NMT model performance, and recommend several strategies for utilizing auxiliary parallel data in building domain-specific NMT models for LRLs.
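One common way to utilize auxiliary-domain parallel data alongside a small in-domain corpus, before fine-tuning or further pre-training an msLM, is temperature-based sampling, which up-weights the smaller in-domain corpus relative to large auxiliary corpora. The sketch below illustrates this idea in plain Python; it is not the paper's own method, and the function name, parameters, and toy data are illustrative assumptions.

```python
import random

def mixed_corpus(domains, temperature=0.5, size=1000, seed=0):
    """Sample a mixed training corpus from several parallel corpora.

    `domains` maps a domain name to a list of (source, target) sentence
    pairs. Each domain's sampling probability is proportional to its
    corpus size raised to `temperature`; temperature < 1 up-samples
    smaller (in-domain) corpora relative to large auxiliary ones.
    (Illustrative sketch only, not the paper's method.)
    """
    rng = random.Random(seed)
    names = list(domains)
    weights = [len(domains[n]) ** temperature for n in names]
    total = sum(weights)
    probs = [w / total for w in weights]
    corpus = []
    for _ in range(size):
        name = rng.choices(names, weights=probs)[0]
        corpus.append(rng.choice(domains[name]))
    return corpus
```

With `temperature=0.3`, an in-domain corpus of 10 pairs mixed with an auxiliary corpus of 1000 pairs receives roughly a 20% share of the sampled batch, far above the ~1% it would get under raw proportional sampling.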
Dec-27-2024