Cheshkov, Anton
Exploring the Potential of Conversational Test Suite Based Program Repair on SWE-bench
Cheshkov, Anton, Zadorozhny, Pavel, Levichev, Rodion, Maslov, Evgeny, Jaldin, Ronaldo Franco
Automatic program repair at the project level may open as-yet-unseen opportunities in various fields of human activity. Since the SWE-bench challenge was presented, numerous solutions have appeared. Patch generation is a part of program repair, and test-suite-based conversational patch generation has proven effective. However, the potential of conversational patch generation has not yet been specifically evaluated on SWE-bench. This study reports experimental results aimed at evaluating the individual effectiveness of conversational patch generation on problems from SWE-bench. The experiments show that a simple conversational pipeline based on LLaMA 3.1 70B can generate valid patches in 47% of cases, which is comparable to the state of the art in program repair on SWE-bench.
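As a rough illustration of test-suite-based conversational patch generation, the sketch below loops between a chat LLM and the project's tests, feeding failures back into the conversation until a patch passes. The helpers `llm_chat`, `apply_patch`, and `run_test_suite` are hypothetical stand-ins, not the paper's actual pipeline:

```python
# Minimal sketch of a conversational test-suite-based repair loop.
# `llm_chat`, `apply_patch`, and `run_test_suite` are illustrative
# placeholders for project-specific code.
import subprocess

def llm_chat(messages):
    """Call a chat LLM (e.g. LLaMA 3.1 70B behind a chat API)."""
    raise NotImplementedError  # plug in your client here

def apply_patch(repo_dir, patch):
    """Apply a unified diff to the working tree; return True on success."""
    proc = subprocess.run(["git", "apply", "-"], input=patch.encode(), cwd=repo_dir)
    return proc.returncode == 0

def run_test_suite(repo_dir):
    """Run the project's tests; return (passed, captured output)."""
    proc = subprocess.run(["python", "-m", "pytest", "-x"],
                          cwd=repo_dir, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def conversational_repair(repo_dir, issue_text, max_rounds=5):
    messages = [{"role": "user",
                 "content": f"Fix this issue with a unified diff:\n{issue_text}"}]
    for _ in range(max_rounds):
        patch = llm_chat(messages)
        if not apply_patch(repo_dir, patch):
            feedback = "The patch did not apply cleanly; please regenerate it."
        else:
            passed, log = run_test_suite(repo_dir)
            if passed:
                return patch  # valid patch: the test suite accepts it
            feedback = f"The patch applied but tests failed:\n{log}"
            # Revert before the next attempt so patches don't stack up.
            subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir)
        # Feed the failure back into the conversation and retry.
        messages += [{"role": "assistant", "content": patch},
                     {"role": "user", "content": feedback}]
    return None
```

The essential point of the conversational setup is that test-suite output, not only the issue text, drives each subsequent generation round.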
CodeR: Issue Resolving with Multi-Agent and Task Graphs
Chen, Dong, Lin, Shaoxin, Zeng, Muhan, Zan, Daoguang, Wang, Jian-Gang, Cheshkov, Anton, Sun, Jun, Yu, Hao, Dong, Guoliang, Aliev, Artem, Wang, Jie, Cheng, Xiao, Liang, Guangtai, Ma, Yuchi, Bian, Pan, Xie, Tao, Wang, Qianxiang
The rapidly growing capability of Large Language Models (LLMs) is dramatically reshaping many industries [2, 3, 4]. The most recent release, GPT-4o [5], demonstrates a significant leap in multi-modal capabilities and artificial intelligence (AI)-human interaction, whilst maintaining the same level of text generation, reasoning, and code intelligence as GPT-4-Turbo [6]. LLMs can now interact with humans and the world much as humans do, which is considered a starting point for LLMs to take over tasks from humans or to collaborate naturally with them. Issue resolving is one of the software engineering tasks explored with LLMs that is particularly relevant in practice. SWE-bench [1] collects 2,294 real-world issues from 12 popular Python libraries.
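The task-graph idea from the title can be pictured as a small dependency graph of subtasks, each owned by an agent role. The node names and roles below are illustrative assumptions for the issue-resolving setting, not CodeR's actual plan format:

```python
# Minimal sketch of a task graph for issue resolving; node names and
# agent roles are assumptions, not CodeR's real plan structure.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    agent: str                         # which agent role handles this task
    deps: list = field(default_factory=list)

def topological_order(tasks):
    """Run order for the graph: a task runs once all its deps are done."""
    done, order = set(), []
    while len(order) < len(tasks):
        progressed = False
        for t in tasks:
            if t.name not in done and all(d in done for d in t.deps):
                order.append(t)
                done.add(t.name)
                progressed = True
        if not progressed:
            raise ValueError("cycle in task graph")
    return order

graph = [
    Task("reproduce_issue", agent="reproducer"),
    Task("locate_fault", agent="fault_localizer", deps=["reproduce_issue"]),
    Task("edit_code", agent="editor", deps=["locate_fault"]),
    Task("verify_fix", agent="verifier", deps=["edit_code"]),
]

for task in topological_order(graph):
    print(f"{task.agent} -> {task.name}")
```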
Finetuning Large Language Models for Vulnerability Detection
Shestov, Alexey, Cheshkov, Anton, Levichev, Rodion, Mussabayev, Ravil, Zadorozhny, Pavel, Maslov, Evgeny, Chibirev, Vadim, Bulychev, Egor
This paper presents the results of finetuning large language models (LLMs) for the task of detecting vulnerabilities in source code. We leverage WizardCoder, a recent improvement over the state-of-the-art LLM StarCoder, and adapt it for vulnerability detection through further finetuning. To accelerate training, we modify WizardCoder's training procedure, and we also investigate optimal training regimes. For the imbalanced dataset, with many more negative examples than positive ones, we also explore different techniques to improve classification performance. The finetuned WizardCoder model achieves improvements in ROC AUC and F1 measures on both balanced and imbalanced vulnerability datasets over a CodeBERT-like model, demonstrating the effectiveness of adapting pretrained LLMs for vulnerability detection in source code. The key contributions are finetuning the state-of-the-art code LLM WizardCoder, increasing its training speed without harming performance, optimizing the training procedure and regimes, handling class imbalance, and improving performance on difficult vulnerability detection datasets. This demonstrates the potential of transfer learning by finetuning large pretrained language models for specialized source code analysis tasks.
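One common way to handle the class imbalance mentioned above is a class-weighted loss. The PyTorch snippet below sketches that option with made-up class counts; the paper's exact recipe may differ:

```python
# Minimal sketch of one imbalance-handling option: a class-weighted loss
# for binary vulnerability classification. Counts and shapes are
# illustrative, not the paper's dataset.
import torch
import torch.nn as nn

num_neg, num_pos = 95_000, 5_000           # e.g. a 19:1 imbalance
pos_weight = torch.tensor([num_neg / num_pos])

# Up-weight the rare positive (vulnerable) class so the model is not
# rewarded for always predicting "not vulnerable".
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                 # classifier-head outputs
labels = torch.randint(0, 2, (8, 1)).float()
loss = criterion(logits, labels)
print(loss.item())
```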
Evaluation of ChatGPT Model for Vulnerability Detection
Cheshkov, Anton, Zadorozhny, Pavel, Levichev, Rodion
In this technical report, we evaluate the performance of the ChatGPT and GPT-3 models on the task of vulnerability detection in code. Our evaluation was conducted on our real-world dataset, using binary and multi-label classification tasks over CWE vulnerabilities. We chose to evaluate the model because it has shown good performance on other code-based tasks, such as solving programming challenges and understanding code at a high level. However, we found that the ChatGPT model performed no better than a dummy classifier on both the binary and multi-label classification tasks for code vulnerability detection.
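A dummy-classifier baseline of the kind referenced here can be set up in a few lines with scikit-learn. The labels and predictions below are synthetic placeholders, not the report's data:

```python
# Minimal sketch of the baseline comparison: score model predictions
# against a majority-class dummy classifier. Data is synthetic.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

y_true = np.array([0] * 90 + [1] * 10)       # imbalanced binary labels
y_model = np.random.randint(0, 2, size=100)  # stand-in for model outputs

X = np.zeros((100, 1))                       # features are irrelevant here
dummy = DummyClassifier(strategy="most_frequent").fit(X, y_true)
y_dummy = dummy.predict(X)

print("model F1:", f1_score(y_true, y_model))
print("dummy F1:", f1_score(y_true, y_dummy))
```

If the model's score does not clearly exceed the dummy baseline, as the report found for ChatGPT, the model is not extracting usable signal for the task.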