Generating High-Quality Datasets for Code Editing via Open-Source Language Models
Zekai Zhang, Mingwei Liu, Zhenxi Chen, Linxi Liang, Yuxuan Chen, Guangsheng Ou, Yanlin Wang, Dan Li, Xin Peng, Zibin Zheng
arXiv.org Artificial Intelligence
Code editing plays a vital role in software engineering, requiring developers to adjust existing code according to natural language instructions while keeping functionality intact and avoiding unnecessary modifications. However, commit-based datasets commonly used for this task are often noisy, lack diversity, and fail to reflect the style of real-world edit instructions. To address this, we introduce OpenCodeEdit, an open-source pipeline that leverages multiple LLMs to synthesize realistic code-edit triplets. The pipeline produces both concise "lazy" instructions and more detailed "descriptive" ones, and applies filtering based on diffs and topics to guarantee data quality and variety. Using this process, we construct OCEDataFT, a curated dataset of 20K samples. Fine-tuning three advanced base models on OCEDataFT leads to significant performance boosts on the CanItEdit benchmark, with relative pass@1 improvements ranging from 4.50% to 20.79%. Notably, the resulting models achieve performance close to closed-source systems, narrowing the gap to GPT-4 to just 3.54%, without relying on proprietary resources or manual annotation.
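The abstract describes synthesizing code-edit "triplets" and filtering them by diffs and topics. As a purely illustrative sketch (the field names, example edit, and filter heuristic below are assumptions, not the paper's actual schema or filtering criteria), a triplet pairing one code snippet with both a "lazy" and a "descriptive" instruction might look like:

```python
# Hypothetical code-edit triplet: original code, a "lazy" and a "descriptive"
# form of the edit instruction, and the edited code. Field names are
# illustrative assumptions, not OpenCodeEdit's actual schema.
triplet = {
    "original_code": "def area(r):\n    return 3.14 * r * r\n",
    "lazy_instruction": "use math.pi",
    "descriptive_instruction": (
        "Replace the hard-coded constant 3.14 with math.pi, adding the "
        "required import while leaving the function signature unchanged."
    ),
    "edited_code": "import math\n\ndef area(r):\n    return math.pi * r * r\n",
}

def diff_is_minimal(t):
    """Toy diff-based filter in the spirit of the paper's quality checks:
    reject samples whose edit rewrites far more lines than it changes."""
    before = t["original_code"].splitlines()
    after = t["edited_code"].splitlines()
    changed = sum(1 for line in after if line not in before)
    return changed <= max(3, len(before))

print(diff_is_minimal(triplet))  # this small edit passes the toy filter
```

The lazy/descriptive pairing mirrors how real developers phrase edit requests at different levels of detail, which is the realism gap the pipeline targets.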
Oct-8-2025