Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models

Teng Wang, Zhangyi Jiang, Zhenqi He, Wenhan Yang, Yanan Zheng, Zeyu Li, Zifan He, Shenyang Tong, Hailei Gong

arXiv.org Artificial Intelligence 

Recent studies show that Large Language Models (LLMs) achieve strong reasoning capabilities through supervised fine-tuning or reinforcement learning. However, a key approach, the Process Reward Model (PRM), suffers from reward hacking, making it unreliable in identifying the best intermediate step. In this paper, we propose a novel reward modeling approach, the Hierarchical Reward Model (HRM), which evaluates both individual and consecutive reasoning steps at fine-grained and coarse-grained levels. HRM excels in evaluating multi-step reasoning coherence and self-reflection, especially in scenarios where an earlier reasoning step is incorrect but a subsequent step identifies and corrects the error. Furthermore, to address the inefficiency of autonomously annotating PRM training data via Monte Carlo Tree Search (MCTS), we introduce a lightweight and effective data augmentation strategy called Hierarchical Node Compression (HNC), based on node merging (combining two consecutive reasoning steps into one step) in the tree structure. By applying HNC to MCTS-generated reasoning trajectories, we enhance the diversity and robustness of HRM training data while introducing controlled noise with minimal computational overhead. Empirical results on the PRM800K dataset demonstrate that HRM, in conjunction with HNC, achieves superior stability and reliability in evaluation compared to PRM. Moreover, cross-domain evaluations on the MATH500 and GSM8K datasets confirm HRM's superior generalization and robustness across diverse reasoning tasks.
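To make the HNC node-merging idea more concrete, below is a minimal sketch of how two consecutive reasoning steps in an MCTS-generated trajectory could be compressed into a single coarser-grained step to produce additional training samples. The class and function names (`ReasoningStep`, `merge_consecutive_steps`, `augment_with_hnc`) and the choice to keep the later step's value as the merged node's label are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of Hierarchical Node Compression (HNC): merging two consecutive
# reasoning steps of a rollout into one coarse-grained step, yielding extra
# (controllably noisier) training samples for the reward model.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import List
import random


@dataclass
class ReasoningStep:
    text: str      # natural-language content of the step
    value: float   # e.g. a Monte Carlo estimate of eventually reaching a correct answer


def merge_consecutive_steps(trajectory: List[ReasoningStep],
                            index: int) -> List[ReasoningStep]:
    """Return a new trajectory where steps `index` and `index + 1` are merged
    into one coarse-grained step (texts concatenated; the later step's value
    is kept as the merged node's label -- an assumption of this sketch)."""
    first, second = trajectory[index], trajectory[index + 1]
    merged = ReasoningStep(text=first.text + "\n" + second.text,
                           value=second.value)
    return trajectory[:index] + [merged] + trajectory[index + 2:]


def augment_with_hnc(trajectory: List[ReasoningStep],
                     num_samples: int = 2,
                     seed: int = 0) -> List[List[ReasoningStep]]:
    """Generate augmented trajectories by randomly picking positions at which
    to compress two consecutive nodes into one."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(num_samples):
        if len(trajectory) < 2:
            break
        idx = rng.randrange(len(trajectory) - 1)
        augmented.append(merge_consecutive_steps(trajectory, idx))
    return augmented


if __name__ == "__main__":
    rollout = [
        ReasoningStep("Step 1: set up the equation.", 0.8),
        ReasoningStep("Step 2: simplify both sides.", 0.6),
        ReasoningStep("Step 3: solve for x.", 0.9),
    ]
    for t in augment_with_hnc(rollout):
        print([s.text.replace("\n", " | ") for s in t])
```

Because the merged samples span two original steps, they expose the reward model to coarser-grained supervision at negligible extra cost, which is consistent with the abstract's claim of enhancing training-data diversity with minimal computational overhead.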