Parrot: A Training Pipeline Enhances Both Program CoT and Natural Language CoT for Reasoning
Jin, Senjie, Chen, Lu, Xi, Zhiheng, Wang, Yuhui, Song, Sirui, Zhou, Yuhao, Zhang, Xinbo, Sun, Peng, Lu, Hong, Gui, Tao, Zhang, Qi, Huang, Xuanjing
arXiv.org Artificial Intelligence
Natural language chain-of-thought (N-CoT) and program chain-of-thought (P-CoT) have emerged as the two primary paradigms by which large language models (LLMs) solve mathematical reasoning problems. Current research typically pursues unidirectional enhancement: P-CoT enhancing N-CoT, or N-CoT enhancing P-CoT. In this paper, we seek to fully leverage the strengths of both paradigms for mutual enhancement and, ultimately, simultaneous improvement. We conduct a detailed analysis of the error types across the two paradigms, based on which we propose Parrot, a novel training pipeline for mathematical problems: 1) three target-designed subtasks that integrate sequential P-CoT and N-CoT generation; 2) a subtask hybrid training strategy that facilitates natural-language semantic transferability; 3) a converted N-CoT auxiliary reward designed to alleviate the sparse rewards in P-CoT optimization. Extensive experiments demonstrate that Parrot significantly improves the performance of both N-CoT and P-CoT, especially N-CoT. With Parrot SFT, the N-CoT performance of LLaMA2 and CodeLLaMA achieves gains of +21.87 and +21.48 on MathQA over the resource-intensive RL baseline.
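The auxiliary-reward idea in point 3) can be sketched as simple reward shaping: the sparse pass/fail signal from program execution is densified with a score derived from the converted N-CoT. This is a minimal illustrative sketch, not the authors' implementation; the function name, the per-step scoring, and the weighting scheme are all assumptions.

```python
def shaped_reward(program_passed: bool,
                  ncot_step_scores: list[float],
                  aux_weight: float = 0.5) -> float:
    """Combine the sparse P-CoT outcome reward with a dense auxiliary
    reward from the converted N-CoT (hypothetical formulation).

    program_passed:   whether the generated program produced the correct
                      answer (the usual sparse 0/1 signal).
    ncot_step_scores: per-step correctness scores in [0, 1] for the
                      N-CoT converted from the program.
    aux_weight:       contribution of the dense auxiliary term.
    """
    sparse = 1.0 if program_passed else 0.0
    # Average step score gives a dense signal even when the program fails.
    dense = (sum(ncot_step_scores) / len(ncot_step_scores)
             if ncot_step_scores else 0.0)
    return sparse + aux_weight * dense
```

Even when the program is wrong (sparse reward 0), partially correct converted reasoning steps still yield a nonzero gradient signal, which is the point of the auxiliary reward.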
Oct-30-2025