Reinforcement learning fine-tuning of language models for instruction following and math reasoning
arXiv.org Artificial Intelligence
This study investigates the effectiveness of reinforcement learning (RL) fine-tuning techniques on a compact language model (Qwen2.5-0.5B Base) for two challenging tasks: instruction following and mathematical reasoning. We compare supervised fine-tuning (SFT), Direct Preference Optimization (DPO) using preference-labeled data, and Reinforce Leave-One-Out (RLOO) with reward models. Our experiments show that RLOO with DeBERTa reward modeling achieves the best alignment, while DPO provides strong and consistent results. For math reasoning tasks, synthetic data augmentation and best-of-N sampling with an external verifier significantly improve accuracy, demonstrating the potential of combining fine-tuning with inference-time tools. These results highlight key trade-offs and practical strategies for training lightweight, task-aligned small-scale language models.
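The abstract names Reinforce Leave-One-Out (RLOO) as the best-performing alignment method. The paper's own implementation is not reproduced here, but the leave-one-out baseline it refers to (each sampled completion is scored against the mean reward of the other k - 1 samples for the same prompt) can be sketched in a few lines of PyTorch; this is an illustrative sketch, not the authors' code:

```python
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Leave-one-out advantages for RLOO.

    rewards: (batch, k) reward-model scores for k sampled
    completions per prompt. Each sample's baseline is the mean
    reward of the other k - 1 samples for the same prompt.
    """
    k = rewards.size(-1)
    # Group sum minus own reward, averaged over the remaining k - 1 samples.
    baseline = (rewards.sum(dim=-1, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline

def rloo_loss(logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE loss with the leave-one-out baseline.

    logprobs: (batch, k) summed token log-probabilities of each
    sampled completion under the current policy.
    """
    adv = rloo_advantages(rewards).detach()  # the baseline is not differentiated
    return -(adv * logprobs).mean()
```

Because the baseline is built from the other samples in the same group, RLOO needs no learned value function, which is part of its appeal for small-scale setups like the 0.5B model studied here.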
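Likewise, the best-of-N sampling with an external verifier mentioned for the math task can be sketched as below. `generate` and `verify` are hypothetical stand-ins for the model sampler and the verifier, neither of which the abstract specifies:

```python
def best_of_n(prompt, generate, verify, n=8):
    """Best-of-N decoding: draw n candidate solutions and return
    the one the external verifier scores highest.

    generate(prompt) -> str        : samples one completion (placeholder)
    verify(prompt, answer) -> float: verifier score, e.g. 1.0 if the
                                     final answer checks out, else 0.0
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: verify(prompt, c))
```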
Jul-29-2025