Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment
–Neural Information Processing Systems
In a typical RLHF pipeline, the reward model serves as a proxy for human preferences and is critical for guiding the RL step toward improving model quality. In this work, we argue that the SFT stage also benefits significantly from learning a reward model. Instead of using the human demonstration data directly via supervised learning, we propose to leverage an Inverse Reinforcement Learning (IRL) technique to build a reward model and a policy model simultaneously. This approach leads to new SFT algorithms that are not only efficient to implement, but also robust to the presence of low-quality supervised learning data. Moreover, we discover a connection between the proposed IRL-based approach and a recent line of work called Self-Play Fine-Tuning (SPIN; Chen et al. [2024]).
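A minimal sketch of the idea described above, not the paper's exact algorithm: given the stated connection to SPIN, one concrete instantiation is a SPIN/DPO-style objective where the reward is defined implicitly as the log-likelihood ratio between the current policy and a frozen reference model, human demonstrations play the role of preferred responses, and the model's own generations play the role of dispreferred responses. The function name, tensor layout, and `beta` hyperparameter below are illustrative assumptions.

```python
# Hedged sketch of a SPIN-style joint reward/policy update from demonstrations.
# Inputs are summed per-sequence log-probabilities under the trainable policy
# and a frozen reference model; the implicit reward is their scaled difference.
import torch
import torch.nn.functional as F


def spin_style_loss(
    policy_logp_demo: torch.Tensor,  # log pi_theta(y_human | x), shape [batch]
    policy_logp_gen: torch.Tensor,   # log pi_theta(y_model | x), shape [batch]
    ref_logp_demo: torch.Tensor,     # log pi_ref(y_human | x), shape [batch]
    ref_logp_gen: torch.Tensor,      # log pi_ref(y_model | x), shape [batch]
    beta: float = 0.1,               # scale on the implicit reward (assumed)
) -> torch.Tensor:
    # Implicit reward: r(x, y) = beta * log( pi_theta(y|x) / pi_ref(y|x) )
    reward_demo = beta * (policy_logp_demo - ref_logp_demo)
    reward_gen = beta * (policy_logp_gen - ref_logp_gen)
    # Logistic loss pushes demonstration rewards above self-generated rewards,
    # so one gradient step improves the policy and the implicit reward together.
    return -F.logsigmoid(reward_demo - reward_gen).mean()


if __name__ == "__main__":
    # Dummy per-sequence log-probabilities, just to show the call signature.
    b = 4
    loss = spin_style_loss(
        policy_logp_demo=torch.randn(b),
        policy_logp_gen=torch.randn(b),
        ref_logp_demo=torch.randn(b),
        ref_logp_gen=torch.randn(b),
    )
    print(loss.item())
```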