Unsupervised Neural Machine Translation with Generative Language Models Only

Jesse Michael Han, Igor Babuschkin, Harrison Edwards, Arvind Neelakantan, Tao Xu, Stanislas Polu, Alex Ray, Pranav Shyam, Aditya Ramesh, Alec Radford, Ilya Sutskever

arXiv.org Artificial Intelligence 

We show how to derive state-of-the-art unsupervised neural machine translation systems from generatively pre-trained language models. Our method consists of three steps: few-shot amplification, distillation, and backtranslation. We first use the zero-shot translation ability of large pre-trained language models to generate translations for a small set of unlabeled sentences. We then amplify these zero-shot translations by using them as few-shot demonstrations for sampling a larger synthetic dataset. This dataset is distilled by discarding the few-shot demonstrations and then fine-tuning. During backtranslation, we repeatedly generate translations for a set of inputs and then fine-tune a single language model on both directions of the translation task at once, ensuring cycle-consistency by swapping the roles of gold monotext and generated translations when fine-tuning. By using our method to leverage GPT-3's zero-shot translation capability, we achieve a new state-of-the-art in unsupervised translation on the WMT14 English-French benchmark, attaining a BLEU score of 42.1.
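To make the backtranslation step concrete, below is a minimal sketch of one round of the procedure described above: translations are sampled from a single bidirectional model, and the roles of gold monotext and generated translation are swapped when building the fine-tuning examples. The `LanguageModel` interface, its `sample`/`fine_tune` methods, and the prompt template are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the backtranslation step described in the abstract.
# Assumptions (not from the paper): a hypothetical `LanguageModel` with
# `sample(prompt)` and `fine_tune(examples)` methods, and a simple
# "<lang>: <sentence>" prompt template.

from dataclasses import dataclass, field
from typing import List


@dataclass
class LanguageModel:
    """Stand-in for a generatively pre-trained LM (e.g. the distilled model)."""
    training_log: List[str] = field(default_factory=list)

    def sample(self, prompt: str) -> str:
        # Placeholder: a real model would generate a translation here.
        return "<generated translation of: " + prompt + ">"

    def fine_tune(self, examples: List[str]) -> None:
        # Placeholder: a real implementation would update model weights.
        self.training_log.extend(examples)


def format_example(src_lang: str, src: str, tgt_lang: str, tgt: str) -> str:
    # Illustrative prompt template only; the paper's exact format may differ.
    return f"{src_lang}: {src}\n{tgt_lang}: {tgt}"


def backtranslation_round(
    model: LanguageModel,
    en_monotext: List[str],
    fr_monotext: List[str],
) -> None:
    """One round of backtranslation on a single bidirectional model.

    For each gold monolingual sentence we sample a translation, then build
    a fine-tuning example in the reverse direction: the generated
    translation becomes the source and the gold monotext becomes the
    target. This role swap is what enforces cycle-consistency.
    """
    examples: List[str] = []

    for en in en_monotext:
        fr_hat = model.sample(format_example("English", en, "French", ""))
        # Train French -> English with the gold English sentence as the target.
        examples.append(format_example("French", fr_hat, "English", en))

    for fr in fr_monotext:
        en_hat = model.sample(format_example("French", fr, "English", ""))
        # Train English -> French with the gold French sentence as the target.
        examples.append(format_example("English", en_hat, "French", fr))

    # Fine-tune a single model on both directions of the task at once.
    model.fine_tune(examples)


if __name__ == "__main__":
    model = LanguageModel()
    for _ in range(3):  # repeated rounds of backtranslation
        backtranslation_round(
            model,
            en_monotext=["The cat sat on the mat."],
            fr_monotext=["Le chat est assis sur le tapis."],
        )
    print(f"{len(model.training_log)} synthetic training examples collected")
```

In this sketch the same model object is fine-tuned on both translation directions in every round, mirroring the abstract's use of a single language model rather than separate forward and backward models.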