Automated Proof Generation for Rust Code via Self-Evolution
Tianyu Chen, Shuai Lu, Shan Lu, Yeyun Gong, Chenyuan Yang, Xuheng Li, Md Rakib Hossain Misu, Hao Yu, Nan Duan, Peng Cheng, Fan Yang, Shuvendu K. Lahiri, Tao Xie, Lidong Zhou
–arXiv.org Artificial Intelligence
Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pressing need for automation. The primary obstacle lies in the severe lack of data: there are far fewer proofs than code for LLMs to train on. In this paper, we introduce SAFE, a novel framework that overcomes the scarcity of human-written proofs to enable automated proof generation for Rust code. SAFE establishes a self-evolving cycle in which data synthesis and fine-tuning collaborate to enhance model capability, leveraging the definitive power of a symbolic verifier to distinguish correct proofs from incorrect ones. SAFE also re-purposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models, empowering them to fix incorrect proofs based on the verifier's feedback. SAFE demonstrates superior efficiency and precision compared to GPT-4o. Through tens of thousands of synthesized proofs and the self-debugging mechanism, we improve the capability of open-source models, initially unacquainted with formal verification, to automatically write proofs for Rust code. This advancement yields a substantial improvement in performance, achieving a 70.50% accuracy rate on a benchmark crafted by human experts, a significant leap over GPT-4o's 24.46%.

Large Language Models (LLMs) have recently exhibited impressive capabilities in code generation (Roziere et al., 2023; Guo et al., 2024; Lozhkov et al., 2024; Google, 2024). However, the correctness of generated code cannot be guaranteed. To tackle this issue, prior research (Chen et al., 2022; Zhang et al., 2024a; Huang et al., 2023) has explored assessing generated code with test cases, which are sometimes themselves generated by LLMs. Although helpful, tests cannot cover all possible program inputs; they can reveal the presence of bugs but cannot prove their absence (Dahl et al., 1972).
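To make concrete what "automatically writing proofs for Rust code" involves, below is a minimal sketch of verifier-checked Rust in the style of the Verus toolchain (an SMT-backed verifier for Rust). The function name, specification, and annotations are illustrative assumptions, not taken from the paper or its benchmark.

```rust
// A minimal sketch, assuming the Verus toolchain (vstd + the verus! macro);
// the function and its specification are illustrative, not from the paper.
use vstd::prelude::*;

verus! {

// `ensures` states the specification; the `invariant` and `decreases`
// clauses on the loop are the kind of proof annotations a human (or a
// fine-tuned model) must supply so the symbolic verifier can prove the
// postcondition for every possible input, not just tested ones.
fn count_up(n: u64) -> (result: u64)
    ensures
        result == n,
{
    let mut i: u64 = 0;
    while i < n
        invariant
            i <= n,
        decreases n - i,
    {
        i = i + 1;
    }
    i
}

fn main() {}

} // verus!
```

Generating annotations like the loop invariant above, at the scale of real Rust code, is the automation gap the abstract describes; the verifier's accept/reject verdict is what lets SAFE separate correct synthesized proofs from incorrect ones.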
Oct-21-2024