Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V. Le, Denny Zhou
arXiv.org Artificial Intelligence
We present Step-Back Prompting, a simple prompting technique that enables LLMs to perform abstraction, deriving high-level concepts and first principles from instances containing specific details. Using these concepts and principles to guide the reasoning steps, LLMs significantly improve their ability to follow a correct reasoning path toward the solution. We conduct experiments with Step-Back Prompting on PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks, including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
Oct-9-2023
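
The abstract describes a two-stage flow: first elicit an abstraction (a "step-back" question and the principles it surfaces), then answer the original question conditioned on those principles. Below is a minimal sketch of that flow; the `step_back_answer` function, the `complete` callable, and the prompt wording are illustrative assumptions, not the paper's exact prompts.

```python
from typing import Callable

def step_back_answer(question: str, complete: Callable[[str], str]) -> str:
    """Two-stage Step-Back Prompting sketch.

    `complete` is any function mapping a prompt string to the model's reply,
    e.g. a thin wrapper around whichever LLM API is available (assumption,
    not part of the paper).
    """
    # Stage 1 (abstraction): derive a more generic "step-back" question that
    # targets the underlying concept or first principle.
    step_back_question = complete(
        "Write a more generic 'step-back' question that asks about the "
        "underlying concept or principle behind this question.\n\n"
        f"Question: {question}\nStep-back question:"
    )
    principles = complete(step_back_question)

    # Stage 2 (grounded reasoning): answer the original question while
    # conditioning on the high-level principles recovered in stage 1.
    return complete(
        f"Principles:\n{principles}\n\n"
        "Using the principles above, reason step by step and answer the "
        f"original question.\n\nQuestion: {question}\nAnswer:"
    )
```

The key design point is that the same model is called twice: once to step back to the governing concept, and once to reason from that concept to the specific answer.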