Large Language Models are Zero-Shot Reasoners

Neural Information Processing Systems 

Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer.
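The zero-shot trigger described above can be sketched as a simple prompt template: unlike few-shot CoT, no worked examples are prepended, and the phrase "Let's think step by step" is placed at the start of the answer. The question text and function name below are illustrative assumptions, not from the paper; the resulting string would be sent to any LLM completion API.

```python
def build_zero_shot_cot_prompt(question: str) -> str:
    """Build a zero-shot chain-of-thought prompt: no few-shot
    examples, just the trigger phrase before the answer."""
    return f"Q: {question}\nA: Let's think step by step."

# Hypothetical example question for illustration.
prompt = build_zero_shot_cot_prompt(
    "A juggler has 16 balls. Half are golf balls, and half of "
    "the golf balls are blue. How many blue golf balls are there?"
)
print(prompt)
```

The model's free-form reasoning would then typically be followed by a second prompt (e.g. "Therefore, the answer is") to extract the final answer, as the paper's two-stage pipeline does.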
