Shi, Xiaowei
Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision
Xi, Zhiheng, Yang, Dingwen, Huang, Jixuan, Tang, Jiafu, Li, Guanyu, Ding, Yiwen, He, Wei, Hong, Boyang, Do, Shihan, Zhan, Wenyu, Wang, Xiao, Zheng, Rui, Ji, Tao, Shi, Xiaowei, Zhai, Yitao, Weng, Rongxiang, Wang, Jingang, Cai, Xunliang, Gui, Tao, Wu, Zuxuan, Zhang, Qi, Qiu, Xipeng, Huang, Xuanjing, Jiang, Yu-Gang
Training large language models (LLMs) to spend more time on thinking and reflection before responding is crucial for effectively solving complex reasoning tasks in fields such as science, coding, and mathematics. However, the effectiveness of mechanisms like self-reflection and self-correction depends on the model's capacity to accurately assess its own performance, which can be limited by factors such as initial accuracy, question difficulty, and the lack of external feedback. In this paper, we delve into a two-player paradigm that separates the roles of reasoning and critique models, where the critique model provides step-level feedback to supervise the reasoning (actor) model at both test-time and training-time. We first propose AutoMathCritique, an automated and scalable framework for collecting critique data, resulting in a dataset of 76,321 responses paired with step-level feedback. Fine-tuning language models on this dataset enables them to generate natural-language feedback for mathematical reasoning. We demonstrate that the critique models consistently improve the actor's performance on difficult queries at test-time, especially when scaling up inference-time computation. Motivated by these findings, we introduce critique-based supervision into the actor's self-training process and propose a critique-in-the-loop self-improvement method. Experiments show that the method improves the actor's exploration efficiency and solution diversity, especially on challenging queries, leading to a stronger reasoning model. Lastly, we take a preliminary step toward training self-talk reasoning models via critique supervision and showcase their potential.
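The abstract above describes a test-time loop in which a critique model supervises the actor with step-level feedback. The following is a minimal sketch of that interaction pattern, not the paper's actual implementation; `call_actor`, `call_critic`, and `critique_and_refine` are hypothetical placeholders standing in for the underlying LLM calls.

```python
# Sketch of an actor/critic test-time loop: the actor proposes a solution,
# the critique model returns step-level feedback, and the actor refines
# its answer until the critic accepts or the round budget runs out.
# All functions below are illustrative stubs, not the paper's code.

def call_actor(question: str, feedback: str | None = None) -> str:
    """Placeholder for the reasoning (actor) model; would call an LLM."""
    base = f"solution to '{question}'"
    return base + (f" (revised using: {feedback})" if feedback else "")

def call_critic(question: str, solution: str) -> tuple[bool, str]:
    """Placeholder for the critique model; returns (is_correct, step-level feedback)."""
    return True, "Step 2 looks fine; no errors found."

def critique_and_refine(question: str, max_rounds: int = 3) -> str:
    """Iterate actor -> critic -> actor until the critic accepts or rounds run out."""
    solution = call_actor(question)
    for _ in range(max_rounds):
        ok, feedback = call_critic(question, solution)
        if ok:
            break
        solution = call_actor(question, feedback=feedback)  # refine with critic feedback
    return solution

if __name__ == "__main__":
    print(critique_and_refine("What is 17 * 24?"))
```

In practice the same critic output can also be used at training time, e.g. to filter or guide the actor's self-generated rationales, which is the critique-in-the-loop idea the abstract refers to.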
Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling
Ding, Yiwen, Xi, Zhiheng, He, Wei, Li, Zhuoyuan, Zhai, Yitao, Shi, Xiaowei, Cai, Xunliang, Gui, Tao, Zhang, Qi, Huang, Xuanjing
Self-improvement methods enable large language models (LLMs) to generate solutions themselves and iteratively train on filtered, high-quality rationales. This process proves effective and reduces the reliance on human supervision in LLMs' reasoning, but performance soon plateaus. We delve into the process and find that models tend to over-sample easy queries and under-sample queries they have yet to master. As iterations proceed, this sampling imbalance is exacerbated, leading to a long-tail distribution in which solutions to difficult queries almost vanish. This phenomenon limits the performance gains of self-improving models. A straightforward remedy is brute-force sampling to rebalance the distribution, but this significantly raises computational costs. In this paper, we introduce Guided Self-Improvement (GSI), a strategy aimed at improving the efficiency of sampling challenging, heavy-tailed data. It leverages Socratic-style guidance signals to help LLMs reason about complex queries, reducing the exploration effort and minimizing computational overhead. Experiments on four models across diverse mathematical tasks show that GSI strikes a balance between performance and efficiency, while also remaining effective on held-out tasks.
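As a rough illustration of the sampling idea described above (a sketch under assumptions, not the released GSI code): queries the model fails on early in a self-improvement round receive a Socratic-style hint before being resampled, so tail queries still contribute correct rationales without brute-force resampling. `sample_solution` and `make_hint` are hypothetical stand-ins for the model call and the guidance construction.

```python
# Sketch: hint-guided sampling for the hard (tail) queries during one
# self-improvement round. Easy queries succeed unaided; queries that fail
# get a Socratic-style hint appended, raising their yield of usable
# rationales without spending a large brute-force sampling budget.

import random

def sample_solution(query: str, hint: str | None = None) -> tuple[str, bool]:
    """Placeholder LLM sampling call; returns (rationale, is_correct)."""
    success_rate = 0.8 if hint else 0.3  # assumed: hints make hard queries more tractable
    return f"rationale for '{query}'", random.random() < success_rate

def guided_self_improvement_round(queries, make_hint, budget_per_query: int = 4):
    """Collect correct rationales; switch to hint-guided sampling after an early failure."""
    dataset = []
    for query in queries:
        hint = None
        for attempt in range(budget_per_query):
            rationale, correct = sample_solution(query, hint)
            if correct:
                dataset.append((query, rationale))
                break
            if attempt == 0:  # after the first failure, add Socratic guidance
                hint = make_hint(query)
    return dataset

if __name__ == "__main__":
    pairs = guided_self_improvement_round(
        ["hard geometry problem", "easy arithmetic"],
        make_hint=lambda q: f"What quantity is the question asking for in: {q}?",
    )
    print(len(pairs), "training pairs collected")
```

The collected (query, rationale) pairs would then be filtered and used to fine-tune the model for the next iteration, as in standard self-improvement pipelines.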