CodeRepoQA: A Large-scale Benchmark for Software Engineering Question Answering
Hu, Ruida, Peng, Chao, Ren, Jingyi, Jiang, Bo, Meng, Xiangxin, Wu, Qinyun, Gao, Pengfei, Wang, Xinchen, Gao, Cuiyun
arXiv.org Artificial Intelligence
In this work, we introduce CodeRepoQA, a large-scale benchmark specifically designed for evaluating repository-level question-answering capabilities in the field of software engineering. CodeRepoQA encompasses five programming languages and covers a wide range of scenarios, enabling comprehensive evaluation of language models. To construct this dataset, we crawl data from 30 well-known repositories on GitHub, the largest platform for hosting and collaborating on code, and carefully filter the raw data. In total, CodeRepoQA is a multi-turn question-answering benchmark with 585,687 entries covering a diverse array of software engineering scenarios, with an average of 6.62 dialogue turns per entry. We evaluate ten popular large language models on our dataset and provide an in-depth analysis. We find that LLMs still have limitations in their software engineering question-answering capabilities, and that medium-length contexts are more conducive to their performance. The entire benchmark is publicly available at https://github.com/kinesiatricssxilm14/CodeRepoQA.
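The abstract describes building multi-turn entries from crawled issue threads. As an illustration of what such an entry might look like, the sketch below flattens a GitHub issue and its comments into ordered dialogue turns; the field names (`role`, `turns`, `num_turns`) and the author-based role assignment are assumptions for illustration, not the benchmark's actual schema.

```python
# Hypothetical sketch: convert a raw issue thread into a multi-turn QA entry.
# Field names and role heuristic are illustrative, not CodeRepoQA's schema.

def build_entry(issue, comments):
    """Flatten an issue and its comments into ordered dialogue turns."""
    turns = [{"role": "questioner", "text": issue["body"]}]
    for c in comments:
        # Assume replies by the issue author continue the question;
        # everyone else is treated as a responder.
        role = "questioner" if c["author"] == issue["author"] else "responder"
        turns.append({"role": role, "text": c["body"]})
    return {"title": issue["title"], "turns": turns, "num_turns": len(turns)}


# Example thread (made up for illustration):
issue = {"title": "Crash on empty input", "author": "alice",
         "body": "Calling parse('') raises IndexError."}
comments = [
    {"author": "bob", "body": "Which version are you on?"},
    {"author": "alice", "body": "v2.1, installed from pip."},
    {"author": "bob", "body": "Fixed in v2.2; empty input now returns None."},
]
entry = build_entry(issue, comments)  # a 4-turn dialogue entry
```

A real pipeline would fetch threads via the GitHub API and apply quality filters before this step; this sketch only shows the flattening into the multi-turn structure the abstract mentions.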
Dec-19-2024