MURPHY: Multi-Turn GRPO for Self Correcting Code Generation
Ekbote, Chanakya, Lingam, Vijay, Omidvar-Tehrani, Behrooz, Huan, Jun, Sanghavi, Sujay, Deoras, Anoop, Soatto, Stefano
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce Murphy, a multi-turn reflective optimization framework that extends GRPO by incorporating iterative self-correction during training. By leveraging both quantitative and qualitative execution feedback, Murphy enables models to progressively refine their reasoning across multiple turns. Evaluations on code generation benchmarks with model families such as Qwen and OLMo show that Murphy consistently improves performance, achieving up to an 8% relative gain in pass@1 over GRPO on similar compute budgets.
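The sketch below illustrates the kind of multi-turn rollout loop the abstract describes: the model generates code, the code is executed, and both the quantitative signal (test pass rate) and qualitative signal (error log) are fed back into the context for the next attempt, with rewards normalized group-relative in GRPO style. The helper names (`generate`, `run_tests`), the feedback format, and the reward scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a Murphy-style multi-turn rollout with execution
# feedback. generate() and run_tests() are assumed stand-ins for the policy
# sampler and the code-execution harness; the paper's details may differ.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Turn:
    completion: str   # code produced this turn
    reward: float     # quantitative feedback (test pass rate)
    feedback: str     # qualitative feedback (error log)

def multi_turn_rollout(
    prompt: str,
    generate: Callable[[str], str],                 # context -> code sample
    run_tests: Callable[[str], Tuple[float, str]],  # code -> (pass_rate, errors)
    max_turns: int = 3,
) -> List[Turn]:
    """Roll out up to max_turns attempts, letting the model self-correct."""
    turns: List[Turn] = []
    context = prompt
    for _ in range(max_turns):
        code = generate(context)
        pass_rate, error_log = run_tests(code)
        turns.append(Turn(code, pass_rate, error_log))
        if pass_rate == 1.0:  # all tests pass: stop early
            break
        # Append both feedback signals so the next turn can revise the code.
        context += (
            f"\n\nPrevious attempt:\n{code}\n"
            f"Tests passed: {pass_rate:.0%}\nErrors:\n{error_log}\n"
            "Revise the code to fix the failures."
        )
    return turns

def group_relative_advantages(rewards: List[float]) -> List[float]:
    """GRPO-style advantage: reward standardized within a sampled group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

In this reading, each turn's reward contributes to the group-relative baseline, so later turns that repair failing code earn positive advantage; how Murphy actually credits intermediate turns is a detail only the paper itself specifies.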
arXiv.org Artificial Intelligence
Nov-12-2025