Reviews: Unified Language Model Pre-training for Natural Language Understanding and Generation

Neural Information Processing Systems 

This paper proposes a method to pretrain a single Transformer architecture on three objectives: (i) unidirectional language modelling (e.g. left-to-right), (ii) bidirectional language modelling, and (iii) sequence-to-sequence language modelling. This unified architecture circumvents the shortcomings of models like BERT (which can condition on bidirectional context, but is harder to use for downstream generation tasks because of that bidirectionality) and GPT-2 (easy to apply to generation tasks since it works left-to-right, but bidirectional encoders are known to work much better than unidirectional ones in sequence-to-sequence models), thereby combining the best of both worlds. This is done using a simple masking scheme that restricts which tokens the model can attend to, depending on which objective function is used (e.g. under the unidirectional, left-to-right objective, all tokens to the right of the target word are masked out). Experiments on text summarisation (CNN/DailyMail and Gigaword), question answering (SQuAD, extractive CoQA, and abstractive CoQA), question generation, and GLUE indicate that the proposed pretraining approach largely matches or surpasses the current state of the art. Crucially, the masking approach enables pretraining the two key ingredients of sequence-to-sequence models within a single model: (i) a bidirectional encoder, and (ii) a unidirectional decoder.
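The masking idea the review describes can be sketched as follows. This is a minimal, hypothetical illustration of per-objective self-attention masks (not the authors' implementation); the function name, the 1/0 mask convention, and the segment layout (source tokens followed by target tokens) are assumptions for the sketch.

```python
import numpy as np


def attention_mask(objective: str, src_len: int, tgt_len: int = 0) -> np.ndarray:
    """Build a self-attention mask (1 = may attend, 0 = blocked) over a
    sequence of src_len source tokens followed by tgt_len target tokens.

    Hypothetical sketch of the masking scheme described in the review:
    one Transformer, different masks per pretraining objective.
    """
    n = src_len + tgt_len
    if objective == "bidirectional":
        # Every token may attend to every token (BERT-style).
        return np.ones((n, n), dtype=int)
    if objective == "unidirectional":
        # Left-to-right: each token sees only itself and tokens to its left.
        return np.tril(np.ones((n, n), dtype=int))
    if objective == "seq2seq":
        # Source segment is fully bidirectional (the "encoder");
        # target segment is causal over source + previous targets (the "decoder").
        mask = np.tril(np.ones((n, n), dtype=int))
        mask[:src_len, :src_len] = 1
        return mask
    raise ValueError(f"unknown objective: {objective}")
```

Under the seq2seq mask, the source block behaves like a bidirectional encoder and the target block like a unidirectional decoder, which is exactly the "best of both worlds" combination the review highlights.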