CodeV: Issue Resolving with Visual Data
Zhang, Linhao, Zan, Daoguang, Yang, Quanshun, Huang, Zhirong, Chen, Dong, Shen, Bo, Liu, Tianyu, Gong, Yongshun, Huang, Pengjie, Lu, Xudong, Liang, Guangtai, Cui, Lizhen, Wang, Qianxiang
Large Language Models (LLMs) have advanced rapidly in recent years, with their applications in software engineering expanding to more complex repository-level tasks. GitHub issue resolving is a key challenge among these tasks. While recent approaches have made progress on this task, they focus on the textual data within issues and neglect visual data. However, this visual data is crucial for resolving issues, as it conveys additional knowledge that text alone cannot. We propose CodeV, the first approach to leverage visual data to enhance the issue-resolving capabilities of LLMs. CodeV resolves each issue through a two-phase process: data processing and patch generation. To evaluate CodeV, we construct a benchmark for visual issue resolving, namely Visual SWE-bench. Through extensive experiments, we demonstrate the effectiveness of CodeV and provide valuable insights into leveraging visual data to resolve GitHub issues.
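The abstract's two-phase process can be pictured as a small pipeline: a data-processing phase that folds an issue's visual content into its textual context, followed by a patch-generation phase over the enriched context. The sketch below is purely illustrative; the class and function names are assumptions, not the authors' implementation, and the image-handling step is a placeholder for a real multi-modal model call.

```python
# Hypothetical sketch of a two-phase issue-resolving pipeline
# (data processing, then patch generation). All names are illustrative.

from dataclasses import dataclass, field


@dataclass
class Issue:
    text: str
    image_urls: list = field(default_factory=list)


def process_data(issue: Issue) -> str:
    """Phase 1: fold visual data into the textual context.

    A real system would describe each image with a multi-modal LLM;
    here we only mark where an image appeared.
    """
    captions = [f"[image: {url}]" for url in issue.image_urls]
    return "\n".join([issue.text, *captions])


def generate_patch(context: str) -> str:
    """Phase 2: produce a patch from the enriched context.

    Placeholder for an LLM call that would return a unified diff.
    """
    return f"patch generated from {len(context)} chars of context"


issue = Issue("Button renders off-screen", ["https://example.com/shot.png"])
patch = generate_patch(process_data(issue))
```

The point of the sketch is the separation of concerns: the patch generator never sees raw images, only a text context that the first phase has already enriched.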
CodeR: Issue Resolving with Multi-Agent and Task Graphs
Chen, Dong, Lin, Shaoxin, Zeng, Muhan, Zan, Daoguang, Wang, Jian-Gang, Cheshkov, Anton, Sun, Jun, Yu, Hao, Dong, Guoliang, Aliev, Artem, Wang, Jie, Cheng, Xiao, Liang, Guangtai, Ma, Yuchi, Bian, Pan, Xie, Tao, Wang, Qianxiang
The rapidly growing capability of Large Language Models (LLMs) is dramatically reshaping many industries [2, 3, 4]. The most recent release of GPT-4o [5] demonstrates a significant leap in multi-modal capabilities and artificial intelligence (AI)-human interaction, whilst maintaining the same level of text generation, reasoning, and code intelligence as GPT-4-Turbo [6]. Because LLMs can interact with humans and the world as humans do, this is considered a starting point for LLMs to take over tasks from humans or to collaborate naturally with them. Issue resolving is one of the software engineering tasks to which LLMs have been applied that is particularly relevant in practice. SWE-bench [1] collects 2,294 real-world issues from 12 popular Python libraries.
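The title's "task graphs" suggest decomposing issue resolving into subtasks with dependencies and executing them in order. The snippet below is a minimal, hypothetical illustration of that idea using a plain dependency dict and Python's standard-library `graphlib`; the subtask names are assumptions, not CodeR's actual plan structure.

```python
# Hypothetical task graph for issue resolving: subtasks form a DAG,
# executed in dependency order. Names are illustrative only.

from graphlib import TopologicalSorter

# Each subtask maps to the set of subtasks it depends on.
task_graph = {
    "reproduce_issue": set(),
    "locate_fault": {"reproduce_issue"},
    "edit_code": {"locate_fault"},
    "run_tests": {"edit_code"},
}


def run(task: str) -> str:
    # Placeholder: a real agent would act here (run a command, call an LLM).
    return f"done:{task}"


order = list(TopologicalSorter(task_graph).static_order())
results = [run(task) for task in order]
```

Encoding the plan as a graph rather than a fixed script lets independent subtasks be reordered, parallelized, or assigned to different agents, which is the usual motivation for multi-agent task-graph designs.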