On the Inconsistencies of Conditionals Learned by Masked Language Models

Tom Young, Yang You

arXiv.org Artificial Intelligence 

Learning to predict masked tokens in a sequence has proven to be a powerful pretraining objective for large language models. After training, such masked language models (MLMs) can provide distributions over tokens conditioned on bidirectional context. In this paper, we show that, contrary to popular assumptions, these bidirectional conditionals often exhibit considerable inconsistencies: taken together, they cannot be derived from any single coherent joint distribution. We empirically quantify such inconsistencies in the simple setting of bigram comparison for two common styles of MLM: T5-style and BERT-style. For example, we show that T5 models often contradict their own preference between two similar bigrams, favoring one bigram under one conditional and the other bigram under another. We find that inconsistencies are ubiquitous across MLMs of diverse sizes and configurations, from RoBERTa-base to GLM-130B. As an initial attempt to address this issue at inference time, we propose Ensemble of Conditionals, a self-ensemble algorithm that jointly considers many of the inconsistent conditionals directly produced by the MLM and synthesizes them into a single distribution used as the model's final output. This ensembling improves on open-source state-of-the-art results on LAMBADA.
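
The bigram setting makes the inconsistency concrete. For a coherent joint distribution p(x1, x2), the two families of conditionals an MLM produces (second token masked vs. first token masked) must agree on the cross-ratio of the four bigrams built from {a, b} x {c, d}, because the unigram marginals cancel. The gap between the two estimates is therefore a simple inconsistency score. Below is a minimal sketch of this check using HuggingFace transformers with roberta-base, one of the models the paper evaluates; the cross-ratio test and the specific bigrams are illustrative choices, not necessarily the paper's exact metric, and the code assumes each word maps to a single BPE token.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def masked_logprob(left: str, right: str, target: str) -> float:
    """log p(target | left [MASK] right) under the MLM.
    Simplification: target is assumed to be a single BPE token with a
    leading space; sentence-initial space handling is not special-cased."""
    text = f"{left} {tok.mask_token} {right}".strip()
    ids = tok(text, return_tensors="pt")
    pos = (ids.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = mlm(**ids).logits[0, pos]
    target_id = tok(" " + target, add_special_tokens=False).input_ids[0]
    return torch.log_softmax(logits, dim=-1)[target_id].item()

# Four bigrams from {a, b} x {c, d}. A coherent joint would make the two
# cross-ratio estimates below identical; their gap is an inconsistency score.
a, b, c, d = "New", "San", "York", "Francisco"
second_masked = (masked_logprob(a, "", c) - masked_logprob(a, "", d)
                 - masked_logprob(b, "", c) + masked_logprob(b, "", d))
first_masked = (masked_logprob("", c, a) - masked_logprob("", d, a)
                - masked_logprob("", c, b) + masked_logprob("", d, b))
print("inconsistency gap:", abs(second_masked - first_masked))
```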

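The abstract does not spell out how Ensemble of Conditionals combines the conditionals, so the sketch below makes a loudly labeled assumption: it simply averages the probability distributions the MLM produces under several masking patterns of the context, each of which queries a different conditional for the target position. The paper's actual combination rule may differ; this is only meant to illustrate the self-ensembling idea.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def ensemble_distribution(words: list[str], pos: int,
                          extra_masks: list[int]) -> torch.Tensor:
    """Average the MLM's distribution for words[pos] over several masking
    patterns: pos masked alone, then pos masked together with each context
    position in extra_masks. NOTE: plain probability averaging is an
    assumption here, not the paper's stated combination rule."""
    dists = []
    for extra in [None, *extra_masks]:
        masked = list(words)
        masked[pos] = tok.mask_token
        if extra is not None:
            masked[extra] = tok.mask_token
        ids = tok(" ".join(masked), return_tensors="pt")
        # Locate the mask slot that corresponds to the target position.
        slots = (ids.input_ids[0] == tok.mask_token_id).nonzero().flatten()
        slot = slots[0] if extra is None or extra > pos else slots[1]
        with torch.no_grad():
            logits = mlm(**ids).logits[0, slot]
        dists.append(torch.softmax(logits, dim=-1))
    return torch.stack(dists).mean(dim=0)

words = "He poured himself a cup of hot coffee".split()
dist = ensemble_distribution(words, pos=7, extra_masks=[5, 6])
print(tok.decode([dist.argmax().item()]))
```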