monolingual data
Appendix for Data Diversification: A Simple Strategy For Neural Machine Translation
Xuan-Phi Nguyen
Finally, we describe the training setup for our back-translation experiments. We continue to differentiate our method from other existing work: our method does not train multiple peer models with EM training either. In each round, the forward (or backward) model takes its turn to play the "back-translation" role during training, and the role is switched in the next round. In other words, source and target are treated identically.
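The round-based role switching described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: `back_translate` is a hypothetical stand-in for decoding with a trained model, and the alternation simply shows which direction synthesizes data in each round.

```python
def back_translate(model_name, sentences):
    """Stand-in for decoding monolingual text with a trained model.

    A real system would run beam search; here we tag each sentence so
    the data flow through the rounds is visible.
    """
    return [f"{model_name}({s})" for s in sentences]

def diversify(src_mono, tgt_mono, rounds=2):
    """Alternate which model plays the back-translation role per round.

    Even rounds: the backward (target->source) model synthesizes source
    sides for target monolingual data. Odd rounds: the forward model
    does the reverse. The roles switch every round, so source and
    target monolingual data are treated identically over time.
    """
    synthetic = []
    for r in range(rounds):
        if r % 2 == 0:
            new_src = back_translate("backward", tgt_mono)
            synthetic.extend(zip(new_src, tgt_mono))
        else:
            new_tgt = back_translate("forward", src_mono)
            synthetic.extend(zip(src_mono, new_tgt))
    return synthetic

pairs = diversify(["ein satz"], ["a sentence"], rounds=2)
```

After two rounds, `pairs` contains one synthetic pair produced by each direction, which is then mixed into the training data.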
Generative Neural Machine Translation
We introduce Generative Neural Machine Translation (GNMT), a latent variable architecture which is designed to model the semantics of the source and target sentences. We modify an encoder-decoder translation model by adding a latent variable as a language agnostic representation which is encouraged to learn the meaning of the sentence. GNMT achieves competitive BLEU scores on pure translation tasks, and is superior when there are missing words in the source sentence. We augment the model to facilitate multilingual translation and semi-supervised learning without adding parameters. This framework significantly reduces over-fitting when there is limited paired data available, and is effective for translating between pairs of languages not seen during training.
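The latent-variable design described in the abstract can be sketched as a minimal forward pass: encode the source into the parameters of a Gaussian posterior, draw a reparameterized sample z, and condition the decoder on z at every target position. All function names and the mean-pooling encoder are simplifications for illustration, not GNMT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(src_embeddings):
    """Pool source embeddings into parameters of a Gaussian q(z | x).

    Mean-pooling is a placeholder for a real sentence encoder; the
    sketch uses unit variance (logvar = 0).
    """
    h = src_embeddings.mean(axis=0)
    mu, logvar = h, np.zeros_like(h)
    return mu, logvar

def sample_z(mu, logvar):
    """Reparameterized sample: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, length):
    """Condition every target position on the same latent z.

    A real decoder would generate tokens autoregressively while
    attending to z; broadcasting z shows the conditioning structure.
    """
    return np.tile(z, (length, 1))

src = rng.standard_normal((5, 8))   # 5 source tokens, dimension 8
mu, logvar = encode(src)
z = sample_z(mu, logvar)
states = decode(z, length=7)        # 7 target positions, all see z
```

Because z is a language-agnostic sentence representation, the same sampling-and-decoding path supports translating into any target language the decoder has seen, which is what enables the multilingual and zero-shot settings described above.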
Pretraining Strategies using Monolingual and Parallel Data for Low-Resource Machine Translation
Nguefack, Idriss Nguepi, Finkelstein, Mara, Sakayo, Toadoum Sari
This research article examines the effectiveness of various pretraining strategies for developing machine translation models tailored to low-resource languages. Although this work considers several low-resource languages, including Afrikaans, Swahili, and Zulu, the translation model is specifically developed for Lingala, an under-resourced African language, building upon the pretraining approach introduced by Reid and Artetxe (2021), originally designed for high-resource languages. Through a series of comprehensive experiments, we explore different pretraining methodologies, including the integration of multiple languages and the use of both monolingual and parallel data during the pretraining phase. Our findings indicate that pretraining on multiple languages and leveraging both monolingual and parallel data significantly enhance translation quality. This study offers valuable insights into effective pretraining strategies for low-resource machine translation, helping to bridge the performance gap between high-resource and low-resource languages. The results contribute to the broader goal of developing more inclusive and accurate NLP models for marginalized communities and underrepresented populations. The code and datasets used in this study are publicly available to facilitate further research and ensure reproducibility, with the exception of certain data that may no longer be accessible due to changes in public availability.
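Mixing monolingual and parallel data during pretraining, as the abstract describes, amounts to interleaving two example types in one training stream. The sketch below is a hedged simplification: the `<mask>` corruption and task tags are placeholders, not the actual objectives of Reid and Artetxe (2021) or this paper.

```python
def build_pretraining_examples(mono, parallel):
    """Combine monolingual and parallel data into one pretraining stream.

    Monolingual sentences become denoising examples (reconstruct the
    sentence from a corrupted copy); parallel pairs become translation
    examples. Masking every third token is an arbitrary illustrative
    corruption scheme.
    """
    examples = []
    for sent in mono:
        corrupted = " ".join(
            "<mask>" if i % 3 == 0 else w
            for i, w in enumerate(sent.split())
        )
        examples.append(("denoise", corrupted, sent))
    for src, tgt in parallel:
        examples.append(("translate", src, tgt))
    return examples

examples = build_pretraining_examples(
    mono=["mbote na yo"],
    parallel=[("mbote", "hello")],
)
```

A single model pretrained on both example types sees far more target-language text than the parallel corpus alone provides, which is the mechanism behind the reported gains for low-resource pairs.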
Figure 1: Left: results of different fine-tuning baselines. Right: results of utilizing back-translation on the baselines.
We will follow the suggestions on writing and related work and revise accordingly. We agree with the reviewers' … Net (AB-Net FB), the variant that trains AB-Net from scratch (AB-Net SC), and baselines trained with back-translation, using English Wikipedia data, which is a subset of the training corpus of BERT. Results are shown in Figure 1. Moreover, our method is orthogonal to BT, as shown by the Ro-En results in Table 3(a) of the main paper.