Reviews: Cross-lingual Language Model Pretraining

Neural Information Processing Systems 

This paper proposes three techniques for incorporating multi-lingual (rather than just mono-lingual) information when pretraining contextualised representations: (i) an autoregressive (causal) language modelling objective, (ii) a masked language modelling objective, and (iii) a translation language modelling objective that exploits parallel data. The methods are evaluated on four tasks: (i) cross-lingual classification (XNLI), (ii) unsupervised machine translation, (iii) supervised machine translation, and (iv) low-resource language modelling. These results are important as they showcase the strong benefit of multi-lingual (rather than just mono-lingual) pretraining across multiple important downstream tasks, achieving a new state of the art.

- Originality: while the methods are not particularly novel (autoregressive and masked language modelling pretraining have both been used before, for ELMo and BERT respectively; this work extends these objectives to the multi-lingual case), the performance gains on all four tasks are still very impressive.
- Quality: the empirical results are strong, and the methodology is sound and explained in sufficient technical detail.
- Clarity: the paper is well-written, makes the connections with the relevant earlier work, and includes important details that can facilitate reproducibility (e.g. the learning rate, number of layers, etc.).
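To make the masked language modelling objective concrete, here is a minimal sketch of the BERT-style masking scheme (sample roughly 15% of positions as prediction targets; of those, replace 80% with a mask token, 10% with a random token, and leave 10% unchanged). The function name and toy vocabulary below are illustrative, not taken from the paper's implementation:

```python
import random

MASK = "[MASK]"
VOCAB = ["a", "b", "c", "d"]  # toy vocabulary for random replacement

def mask_tokens(tokens, p=0.15, seed=0):
    """BERT-style masking sketch: choose ~p of positions as targets;
    80% become [MASK], 10% a random vocabulary token, 10% stay unchanged."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok  # the model must recover the original token here
            r = rng.random()
            if r < 0.8:
                out[i] = MASK
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)
            # else: keep the original token at this position
    return out, targets
```

The corrupted sequence `out` is fed to the model, which is trained to predict the original tokens stored in `targets`; in the cross-lingual setting the same objective is applied to text from many languages with a shared vocabulary.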