Generated Knowledge Prompting for Commonsense Reasoning

Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

arXiv.org Artificial Intelligence 

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models. To investigate this question, we develop generated knowledge prompting, which consists of generating knowledge from a language model, then providing the knowledge as additional input when answering a question. Our method does not require task-specific supervision for knowledge integration, or access to a structured knowledge base, yet it improves the performance of large-scale, state-of-the-art models on four commonsense reasoning tasks, achieving state-of-the-art results on numerical commonsense (NumerSense), general commonsense (CommonsenseQA 2.0), and scientific commonsense (QASC) benchmarks.

Figure 1: Generated knowledge prompting involves (i) using few-shot demonstrations to generate question-related knowledge statements from a language model; (ii) using a second language model to make predictions with each knowledge statement, then selecting the highest-confidence prediction.
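The two-stage pipeline in the caption can be sketched in a few lines of Python. The snippet below is a minimal illustration under stated assumptions, not the authors' released implementation: generate_text and answer_log_prob are hypothetical stand-ins for whatever language-model API performs text generation and answer scoring, and num_knowledge is an arbitrary sample count.

```python
from typing import Callable, List, Tuple

def generated_knowledge_prompting(
    question: str,
    choices: List[str],
    few_shot_prompt: str,                          # demonstrations mapping questions to knowledge
    generate_text: Callable[[str], str],           # hypothetical knowledge-generation LM (stage i)
    answer_log_prob: Callable[[str, str], float],  # hypothetical inference-LM scorer (stage ii)
    num_knowledge: int = 5,
) -> Tuple[str, str]:
    """Return (best_answer, supporting_knowledge_statement)."""
    # Stage (i): sample question-related knowledge statements few-shot.
    knowledge_statements = [
        generate_text(few_shot_prompt + f"\nInput: {question}\nKnowledge:")
        for _ in range(num_knowledge)
    ]

    # Stage (ii): answer the question once per knowledge statement, then
    # keep the single highest-confidence prediction across all statements.
    best_score, best_answer, best_knowledge = float("-inf"), "", ""
    for knowledge in knowledge_statements:
        context = f"{knowledge} {question}"
        for choice in choices:
            score = answer_log_prob(context, choice)
            if score > best_score:
                best_score, best_answer, best_knowledge = score, choice, knowledge
    return best_answer, best_knowledge
```

Note that no task-specific training happens anywhere in this loop, which is the point of the method: knowledge integration is handled entirely through prompting and confidence-based selection.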
