Iteratively Prompt Pre-trained Language Models for Chain of Thought
Boshi Wang, Xiang Deng, Huan Sun
arXiv.org Artificial Intelligence
While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling this knowledge to solve tasks requiring complex, multi-step reasoning. Similar to how humans develop a "chain of thought" for these tasks, how can we equip PLMs with such abilities? In this work, we explore an iterative prompting framework, a new prompting paradigm that progressively elicits relevant knowledge from PLMs for multi-step inference. We identify key limitations of existing prompting methods: they are either restricted to queries with a single identifiable relation/predicate or agnostic to input contexts, which makes it difficult to capture variabilities across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's contexts. Experiments on three datasets involving multi-step reasoning show the effectiveness of the iterative scheme and the context-aware prompter design.
Oct-23-2022
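
The iterative scheme described in the abstract can be pictured as a simple loop: at each step, a prompter conditions on the query plus the knowledge elicited so far, queries the PLM for one more piece of knowledge, and stops once an answer is reached. Below is a minimal Python sketch of that loop; all names (`Prompter`, `plm_generate`, the `ANSWER:` stopping marker) are illustrative placeholders, not the authors' actual implementation.

```python
from typing import Callable, List


class Prompter:
    """Stand-in for the learned context-aware prompter: synthesizes the
    next prompt from the query plus everything elicited so far."""

    def build_prompt(self, query: str, elicited: List[str]) -> str:
        context = " ".join(elicited)
        return f"Question: {query}\nKnown so far: {context}\nNext fact:"


def iterative_prompting(query: str,
                        plm_generate: Callable[[str], str],
                        max_steps: int = 5) -> List[str]:
    """Progressively elicit one knowledge statement per step from a PLM."""
    prompter = Prompter()
    elicited: List[str] = []
    for _ in range(max_steps):
        prompt = prompter.build_prompt(query, elicited)
        step = plm_generate(prompt)        # one recalled fact per call
        elicited.append(step)
        if step.startswith("ANSWER:"):     # illustrative stopping criterion
            break
    return elicited


if __name__ == "__main__":
    # Toy PLM stand-in so the sketch runs end to end; a real system
    # would instead call a pre-trained model's generate method.
    def fake_plm(prompt: str) -> str:
        if "Michelle Obama" in prompt:
            return "ANSWER: Chicago."
        return "Barack Obama's wife is Michelle Obama."

    for step in iterative_prompting("Where was Barack Obama's wife born?",
                                    fake_plm):
        print(step)
```

The key design point the sketch illustrates is that the prompt is rebuilt at every step from the accumulated context, rather than being a single fixed template, which is what lets the prompter adapt to variabilities across inference steps.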