Relevance-guided Neural Machine Translation

Isidora Chara Tourni, Derry Wijaya

arXiv.org Artificial Intelligence 

Explanations & Explanation-guided training

LRP was introduced by Bach et al. (2015). Several previous works outline and summarize the findings of explainability- and interpretability-related research in NLP (Belinkov et al., 2020; Sun et al., 2021b; Tenney et al., 2020; Madsen et al., 2021; Danilevsky et al., 2020; Qian et al., 2021). Of particular interest, and the focus of our work, are those methods that, along with measuring feature importance and distinguishing relevant from irrelevant features, are utilized to augment the intermediate learned features and improve model performance.

Unsupervised Neural Machine Translation

Unsupervised Neural Machine Translation (UNMT) has seen remarkable progress in recent years, with a large number of methods proposed to enable NMT when parallel data are few or non-existent for certain language pairs. Training techniques such as Back-Translation (Sennrich et al., 2015) and Auto-Encoding have been widely studied as ways to efficiently train NMT models under such data-scarcity conditions and obtain high-quality translation results.
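As a rough illustration of the LRP idea referenced above, the following sketch applies the epsilon-stabilized redistribution rule to a single linear layer: the relevance assigned to each output is propagated back to the inputs in proportion to their contribution to that output's pre-activation. This is a minimal toy sketch, not the paper's implementation; the function name `lrp_linear` and the toy numbers are our own.

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    # Epsilon-rule LRP for a linear layer z = W @ x + b.
    # Each input i contributed W[j, i] * x[i] to output j; relevance
    # R_out[j] is split among inputs in proportion to that contribution.
    z = W @ x + b                       # pre-activations, shape (n_out,)
    denom = z + eps * np.sign(z)        # stabilized denominator
    R_in = x * (W.T @ (R_out / denom))  # redistributed relevance, shape (n_in,)
    return R_in

# Toy example: 2 inputs, 1 output, all of the output's relevance is 1.0.
x = np.array([1.0, 2.0])
W = np.array([[0.5, 0.25]])
b = np.array([0.0])
R = lrp_linear(x, W, b, R_out=np.array([1.0]))
```

A useful sanity check on such a rule is (approximate) relevance conservation: the input relevances sum back to the output relevance, up to the epsilon stabilizer.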