PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback

Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, Yuenan Guo, Qianxiang Wang

arXiv.org Artificial Intelligence 

Large Language Models for Code (Code LLMs) are flourishing: new and powerful models are released on a weekly basis, demonstrating remarkable performance on code generation tasks. Various approaches have been proposed to boost the code generation performance of pre-trained Code LLMs, such as supervised fine-tuning, instruction tuning, and reinforcement learning. In this paper, we propose a novel RRTF (Rank Responses to align Test&Teacher Feedback) framework, which can effectively and efficiently boost pre-trained large language models for code generation. Under this framework, we present PanGu-Coder2, which achieves 62.20% pass@1 on the OpenAI HumanEval benchmark. Furthermore, through extensive evaluations on the CoderEval and LeetCode benchmarks, we show that PanGu-Coder2 consistently outperforms all previous Code LLMs.
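To make the core idea of ranking responses against test and teacher feedback concrete, below is a minimal, illustrative Python sketch. It is not the paper's implementation: the functions `run_unit_tests` and `teacher_score`, the 0.7/0.3 weighting, and the `Candidate` structure are all hypothetical stand-ins, assumed only for the example. The sketch shows one plausible way to score sampled completions with a blend of test results and teacher preference and sort them, so that higher-ranked samples could be emphasized in a subsequent fine-tuning step.

```python
"""Illustrative RRTF-style ranking step (sketch only, not the paper's code)."""
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    prompt: str
    completion: str
    score: float = 0.0  # combined test + teacher feedback score


def rank_candidates(
    prompt: str,
    completions: List[str],
    run_unit_tests: Callable[[str, str], float],  # hypothetical: fraction of tests passed in [0, 1]
    teacher_score: Callable[[str, str], float],   # hypothetical: teacher/critic preference in [0, 1]
    test_weight: float = 0.7,                     # assumed weighting, not from the paper
) -> List[Candidate]:
    """Score each sampled completion with test and teacher feedback, then sort
    from best to worst. Higher-ranked samples would carry more weight (or be
    kept exclusively) when constructing the fine-tuning data."""
    ranked = []
    for completion in completions:
        tests = run_unit_tests(prompt, completion)
        teacher = teacher_score(prompt, completion)
        combined = test_weight * tests + (1.0 - test_weight) * teacher
        ranked.append(Candidate(prompt, completion, combined))
    ranked.sort(key=lambda c: c.score, reverse=True)
    return ranked


if __name__ == "__main__":
    # Toy usage with stubbed feedback functions.
    def fake_tests(prompt: str, completion: str) -> float:
        return 1.0 if "return" in completion else 0.0

    def fake_teacher(prompt: str, completion: str) -> float:
        return min(len(completion) / 80.0, 1.0)

    samples = [
        "def add(a, b): return a + b",
        "def add(a, b): print(a + b)",
    ]
    for cand in rank_candidates("Write add(a, b).", samples, fake_tests, fake_teacher):
        print(f"{cand.score:.2f}  {cand.completion}")
```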
