Exploring Parameter-Efficient Fine-Tuning and Backtranslation for the WMT 25 General Translation Task
Felipe Fujita, Hideyuki Takada
arXiv.org Artificial Intelligence
In this paper, we explore the effectiveness of combining fine-tuning and backtranslation on a small Japanese corpus for neural machine translation. Starting from a baseline English→Japanese model (COMET = 0.460), we first apply backtranslation (BT) using synthetic data generated from monolingual Japanese corpora, yielding a modest increase (COMET = 0.468). Next, we fine-tune (FT) the model on a small genuine parallel dataset drawn from diverse Japanese news and literary corpora, achieving a substantial jump to COMET = 0.589 when using Mistral 7B. Finally, we integrate both backtranslation and fine-tuning, first augmenting the small dataset with BT-generated examples and then adapting via FT, which further boosts performance to COMET = 0.597. These results demonstrate that, even with limited training data, the synergistic use of backtranslation and targeted fine-tuning on Japanese corpora can significantly enhance translation quality, outperforming each technique in isolation. This approach offers a lightweight yet powerful strategy for improving low-resource language pairs.
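The two-step recipe the abstract describes (backtranslation to synthesize parallel data from monolingual Japanese, then parameter-efficient fine-tuning of Mistral 7B) can be sketched as below. This is a minimal illustration assuming Hugging Face transformers and peft; the ja→en backtranslation model, LoRA hyperparameters, and prompt format are placeholders, not the authors' actual configuration.

```python
# Minimal sketch of the BT + PEFT recipe described above. Model names,
# hyperparameters, and the prompt format are illustrative assumptions,
# not the authors' actual setup.
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSeq2SeqLM,
                          AutoTokenizer)
from peft import LoraConfig, get_peft_model

# Step 1: backtranslation -- turn monolingual Japanese into synthetic
# (English, Japanese) pairs using any off-the-shelf ja->en model.
bt_name = "Helsinki-NLP/opus-mt-ja-en"  # assumed BT model
bt_tok = AutoTokenizer.from_pretrained(bt_name)
bt_model = AutoModelForSeq2SeqLM.from_pretrained(bt_name)

def backtranslate(ja_sentences):
    batch = bt_tok(ja_sentences, return_tensors="pt",
                   padding=True, truncation=True)
    with torch.no_grad():
        out = bt_model.generate(**batch, max_new_tokens=128)
    en = bt_tok.batch_decode(out, skip_special_tokens=True)
    # Synthetic English source paired with the genuine Japanese target.
    return list(zip(en, ja_sentences))

# Step 2: parameter-efficient fine-tuning of Mistral 7B with LoRA, so
# only small adapter matrices are trained on top of the frozen weights.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # prints the small trainable fraction

# The FT corpus is the small genuine parallel set plus backtranslate(...)
# output, rendered as prompts such as
#   "Translate English to Japanese:\n{en}\n{ja}"
# and passed to a standard supervised fine-tuning loop (omitted here).
```

Evaluation as reported in the abstract would then score the model's outputs with COMET (available, for example, through the unbabel-comet package); the combined setup corresponds to running the FT step on the genuine pairs augmented with the backtranslated examples.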
Nov-18-2025
- Country:
  - Asia
    - Japan > Honshū
      - Kansai > Kyoto Prefecture > Kyoto (0.04)
      - Tōhoku (0.04)
    - Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
    - Singapore (0.04)
  - Europe > Portugal
  - North America
    - Dominican Republic (0.04)
    - United States
      - Illinois (0.04)
      - Pennsylvania > Philadelphia County > Philadelphia (0.04)
- Genre:
- Research Report > New Finding (0.49)
- Technology: