Let's reward step by step: Step-Level reward model as the Navigators for Reasoning

Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, Hongxia Yang

arXiv.org Artificial Intelligence 

Recent years have seen considerable advancements in multi-step reasoning with Large Language Models (LLMs). Previous studies have elucidated the merits of integrating feedback or search mechanisms during model inference to improve reasoning accuracy. The Process-Supervised Reward Model (PRM) typically furnishes LLMs with step-by-step feedback during the training phase, as in Proximal Policy Optimization (PPO) or rejection sampling. Our objective is to examine the efficacy of PRM in the inference phase, helping to discern the optimal solution paths for multi-step tasks such as mathematical reasoning and code generation. To this end, we propose a heuristic greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs. This tailored PRM demonstrated enhanced results compared to Chain of Thought (CoT) prompting on mathematical benchmarks like GSM8K and MATH. Additionally, to explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similarly improved performance in code generation.

In the exciting evolution of Large Language Models (LLMs) such as GPT (OpenAI, 2023; Brown et al., 2020), LLaMA (Touvron et al., 2023a;b), OPT (Zhang et al., 2022a), Falcon (Penedo et al., 2023), and PaLM (Anil et al., 2023; Chowdhery et al., 2022), a consistent ability to handle tasks from conversation to text generation has been evident. However, when it comes to reasoning, especially multi-step reasoning, current LLMs, even with sophisticated prompting techniques like Chain of Thought (CoT) (Wei et al., 2023), remain prone to a cascade of errors in their generation processes. As the number of reasoning steps increases, these LLMs struggle to provide and integrate effective feedback, so one error leads to another. Achieving refined multi-step reasoning capability for LLMs can unlock their potential across an even broader array of applications, ranging from complex problem-solving to high-level intellectual tasks.
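The abstract above describes a heuristic greedy search that uses step-level PRM feedback to select among candidate reasoning steps at inference time. The following is a minimal sketch of what such a loop could look like; it is not the paper's implementation, and the `propose_steps` and `score_step` callables are hypothetical stand-ins for an LLM step generator and a PRM scorer.

```python
# Minimal sketch of PRM-guided greedy step-level search.
# Assumptions (not from the paper): `propose_steps` asks an LLM for k candidate
# next steps given the problem and the partial reasoning path; `score_step`
# asks a process-supervised reward model to score a candidate next step.
from typing import Callable, List

def greedy_prm_search(
    problem: str,
    propose_steps: Callable[[str, List[str], int], List[str]],
    score_step: Callable[[str, List[str], str], float],
    num_candidates: int = 5,
    max_steps: int = 10,
    stop_token: str = "<answer>",
) -> List[str]:
    """Greedily extend a reasoning path, keeping the PRM's top-scored candidate each round."""
    path: List[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(problem, path, num_candidates)
        if not candidates:
            break
        # Keep the candidate step the step-level reward model scores highest.
        best = max(candidates, key=lambda step: score_step(problem, path, step))
        path.append(best)
        # Stop once the chosen step contains a final answer marker.
        if stop_token in best:
            break
    return path
```

A beam-style variant that retains the top-k partial paths instead of a single path is a natural extension, but the greedy selection shown here mirrors the heuristic search flavor described in the abstract.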
