Fu, Yujia
CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
Song, Xiaoshuai, Diao, Muxi, Dong, Guanting, Wang, Zhengyang, Fu, Yujia, Qiao, Runqi, Wang, Zhexu, Fu, Dayuan, Wu, Huangxuan, Liang, Bin, Zeng, Weihao, Wang, Yejie, GongQue, Zhuoma, Yu, Jianing, Tan, Qiuna, Xu, Weiran
Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of artificial intelligence and modern society. However, the current large language model (LLM) community focuses heavily on benchmarks for analyzing specific foundational skills (e.g., mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first bilingual (Chinese-English) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 5K meticulously curated test samples covering 26 subfields across 4 key areas of computer science, encompassing various task forms and a division into knowledge- and reasoning-oriented questions. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scale. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvement, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performance in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench.
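For readers who want a feel for how a benchmark like CS-Bench is typically scored, the sketch below computes per-subfield accuracy from graded model outputs. It is only an illustration: the file format and field names (subfield, answer, prediction) are assumptions, and the official data format and evaluation code are in the linked repository.

```python
# Minimal sketch of scoring a model on a CS-Bench-style multiple-choice split.
# The JSONL layout and field names are illustrative assumptions, not the
# official format; see https://github.com/csbench/csbench for the real code.
import json
from collections import defaultdict

def accuracy_by_subfield(path):
    """Compute per-subfield accuracy from a JSONL file of graded samples."""
    correct, total = defaultdict(int), defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            sub = sample["subfield"]
            total[sub] += 1
            if sample["prediction"].strip().upper() == sample["answer"].strip().upper():
                correct[sub] += 1
    return {sub: correct[sub] / total[sub] for sub in total}

if __name__ == "__main__":
    for subfield, acc in sorted(accuracy_by_subfield("csbench_results.jsonl").items()):
        print(f"{subfield:<30} {acc:.1%}")
```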
Security Code Review by LLMs: A Deep Dive into Responses
Yu, Jiaxin, Liang, Peng, Fu, Yujia, Tahir, Amjed, Shahin, Mojtaba, Wang, Chong, Cai, Yangxiao
Security code review aims to combine automated tools and manual efforts to detect security defects during development. The rapid development of Large Language Models (LLMs) has shown promising potential in software development and has opened up new possibilities for automated security code review. To explore the challenges of applying LLMs to practical code review for security defect detection, this study compared the detection performance of three state-of-the-art LLMs (Gemini Pro, GPT-4, and GPT-3.5) under five prompts on 549 code files that contain security defects from real-world code reviews. By analyzing 82 responses generated by the best-performing LLM-prompt combination on 100 randomly selected code files, we extracted and categorized the quality problems present in these responses into 5 themes and 16 categories. Our results indicate that the responses produced by LLMs often suffer from verbosity, vagueness, and incompleteness, highlighting the necessity to enhance their conciseness, understandability, and compliance with the requirements of security defect detection. This work reveals the deficiencies of LLM-generated responses in security code review and paves the way for future optimization of LLMs for this task.
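To illustrate the kind of prompt-based setup such a study compares, the following sketch assembles a security-review prompt for a single code file. The prompt wording and the query_llm placeholder are assumptions for illustration, not the paper's actual prompts or model interface.

```python
# Illustrative sketch of prompt-based security defect detection, in the spirit of
# the setups compared in the study. The prompt text and the query_llm stub are
# assumptions, not the paper's actual prompts or SDK calls.
def build_review_prompt(code: str, with_role: bool = True) -> str:
    """Assemble a security-review prompt; richer prompts add role and output-format cues."""
    parts = []
    if with_role:
        parts.append("You are a security expert performing a code review.")
    parts.append("Identify any security defects in the following code. For each defect, "
                 "report the affected line, the CWE category if known, and a suggested fix.")
    parts.append("Code under review:\n" + code)
    return "\n\n".join(parts)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to Gemini Pro, GPT-4, or GPT-3.5 through its own SDK."""
    raise NotImplementedError

if __name__ == "__main__":
    snippet = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
    print(build_review_prompt(snippet))
```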
Copilot Refinement: Addressing Code Smells in Copilot-Generated Python Code
Zhang, Beiqi, Liang, Peng, Feng, Qiong, Fu, Yujia, Li, Zengyang
As one of the most popular dynamic languages, Python experiences a decrease in readability and maintainability when code smells are present. Recent advancements in Large Language Models have sparked growing interest in AI-enabled tools for both code generation and refactoring. GitHub Copilot is one such tool that has gained widespread usage. Copilot Chat, released in September 2023, functions as an interactive tool aimed at facilitating natural-language-powered coding. However, limited attention has been given to understanding code smells in Copilot-generated Python code and Copilot's ability to fix the code smells it generates. To this end, we built a dataset comprising 102 code smells in Copilot-generated Python code. Our aim is to first explore the occurrence of code smells in Copilot-generated Python code and then evaluate the effectiveness of Copilot in fixing these code smells using different prompts. The results show that 8 out of 10 types of Python smells can be detected in Copilot-generated Python code, among which Multiply-Nested Container is the most common. For these code smells, Copilot Chat achieves the highest fixing rate of 87.1%, showing promise in fixing Python code smells generated by Copilot itself. Moreover, the effectiveness of Copilot Chat in fixing these smells improves when more detailed prompts are provided. However, using Copilot Chat to fix these smells might introduce new code smells.
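As an illustration of the smell the paper reports as most common, the snippet below shows a multiply-nested container and one possible refactoring with a dataclass. The example is constructed for clarity and is not drawn from the paper's dataset of Copilot-generated code.

```python
# Constructed example (not from the paper's dataset) of the Multiply-Nested
# Container smell and one way to flatten it with a dataclass.
from dataclasses import dataclass

# Smelly: a dict of lists of tuples of dicts is hard to read and maintain.
orders_smelly = {
    "alice": [({"sku": "A1", "qty": 2}, 19.99), ({"sku": "B7", "qty": 1}, 5.50)],
}

# Refactored: a small dataclass makes each record explicit and self-documenting.
@dataclass
class OrderLine:
    customer: str
    sku: str
    qty: int
    price: float

orders = [
    OrderLine("alice", "A1", 2, 19.99),
    OrderLine("alice", "B7", 1, 5.50),
]
total = sum(line.qty * line.price for line in orders)
print(f"Total: {total:.2f}")
```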
BotanicGarden: A high-quality and large-scale robot navigation dataset in challenging natural environments
Liu, Yuanzhi, Fu, Yujia, Qin, Minghui, Xu, Yufeng, Xu, Baoxin, Chen, Fengdong, Goossens, Bart, Yu, Hongwei, Liu, Chun, Chen, Long, Tao, Wei, Zhao, Hui
The rapid development of mobile robotics and autonomous navigation over the years has been largely empowered by public datasets for testing and upgrading algorithms, such as for SLAM and localization tasks. Impressive demos and benchmark results have arisen, indicating the establishment of a mature technical framework. However, from the viewpoint of real-world deployment, there are still critical robustness defects in challenging environments, especially in large-scale, GNSS-denied, texture-monotonous, and unstructured scenarios. To meet the pressing validation demands in this scope, we build a novel challenging robot navigation dataset in a large botanic garden of more than 48,000 m². Comprehensive sensors are employed, including high-resolution/high-rate stereo grayscale and RGB cameras, rotational and forward 3D LiDARs, and low-cost and industrial-grade IMUs, all of which are well calibrated and accurately hardware-synchronized. An all-terrain wheeled robot is configured to mount the sensor suite and provide odometry data. A total of 32 long and short sequences with 2.3 million images are collected, covering scenes of thick woods, riversides, narrow paths, bridges, and grasslands that rarely appear in previous resources. Notably, both highly accurate ego-motion and 3D map ground truth are provided, along with finely annotated vision semantics. Our goal is to contribute a high-quality dataset that advances robot navigation and sensor fusion research to a higher level.
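As a small, generic illustration of working with hardware-synchronized streams such as those in this dataset, the sketch below pairs two sorted timestamp lists by nearest match within a tolerance. The sample values and the 20 ms tolerance are assumptions for illustration, not properties of the BotanicGarden recordings.

```python
# Generic sketch (not dataset-specific) of associating two sensor streams by
# nearest timestamp, as is common when fusing camera and LiDAR measurements.
import bisect

def associate(ts_a, ts_b, max_dt=0.02):
    """Match each timestamp in ts_a to the nearest in ts_b within max_dt seconds.
    Both lists are assumed to be sorted in ascending order."""
    pairs = []
    for t in ts_a:
        i = bisect.bisect_left(ts_b, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(ts_b[k] - t))
        if abs(ts_b[j] - t) <= max_dt:
            pairs.append((t, ts_b[j]))
    return pairs

if __name__ == "__main__":
    cam = [0.00, 0.10, 0.20, 0.30]        # e.g., 10 Hz camera timestamps (made up)
    lidar = [0.005, 0.105, 0.206, 0.310]  # e.g., 10 Hz LiDAR timestamps (made up)
    print(associate(cam, lidar))
```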
Simultaneous Localization and Mapping Related Datasets: A Comprehensive Survey
Liu, Yuanzhi, Fu, Yujia, Chen, Fengdong, Goossens, Bart, Tao, Wei, Zhao, Hui
Due to the complicated procedures and costly hardware involved, Simultaneous Localization and Mapping (SLAM) has been heavily dependent on public datasets for development and evaluation, leading to many impressive demos and good benchmark scores. In stark contrast, however, SLAM is still struggling on its way toward mature deployment, which sounds a warning: some of the datasets are overexposed, causing biased usage and evaluation. This raises the question of how to comprehensively assess existing datasets and select them correctly. Moreover, current datasets do have limitations, so how should new ones be built, and in which directions? Nevertheless, a comprehensive survey that tackles these issues does not yet exist, although it is urgently demanded by the community. To fill this gap, this paper strives to cover a range of cohesive topics on SLAM-related datasets, including general collection methodology and fundamental characteristic dimensions, a taxonomy of SLAM-related tasks and a categorization of datasets, an introduction to the state of the art, an overview and comparison of existing datasets, a review of evaluation criteria, and analyses and discussions of current limitations and future directions, aiming not only to guide dataset selection but also to promote dataset research.
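As an example of one evaluation criterion commonly reviewed for SLAM datasets, the sketch below computes the translational RMSE of the Absolute Trajectory Error (ATE), assuming the estimated trajectory has already been aligned and time-associated with the ground truth; the sample numbers are made up for illustration.

```python
# Minimal sketch of a common SLAM evaluation criterion: the RMSE of the Absolute
# Trajectory Error (ATE) over translations, assuming the estimate is already
# aligned to the ground truth (e.g., via a similarity transform) and associated.
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Both inputs are (N, 3) arrays of corresponding, pre-aligned positions."""
    errors = np.linalg.norm(gt_xyz - est_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

if __name__ == "__main__":
    gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)      # made-up ground truth
    est = np.array([[0.02, 0, 0], [1.01, 0.03, 0], [1.97, -0.02, 0]])  # made-up estimate
    print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")
```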