GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
Nizar Islah, Justine Gehring, Diganta Misra, Eilif Muller, Irina Rish, Terry Yue Zhuo, Massimo Caccia
arXiv.org Artificial Intelligence
Terry Yue Zhuo · Massimo Caccia
Monash University · ServiceNow Research · Sea AI Lab

Abstract

The rapid evolution of software libraries presents a significant challenge for code generation models, which must adapt to frequent version updates while maintaining compatibility with previous versions. Existing code completion benchmarks often overlook this dynamic aspect, and the one benchmark that does consider it relies on static code prediction tasks without execution-based evaluation, offering a limited perspective on a model's practical usability. To address this gap, we introduce GitChameleon, a novel, manually curated dataset comprising 116 Python code completion problems, each conditioned on specific library versions and accompanied by executable unit tests. GitChameleon is designed to rigorously assess the ability of modern large language models (LLMs) to generate version-specific code that is not only syntactically correct but also functionally accurate upon execution. Our comprehensive evaluations reveal that state-of-the-art LLMs struggle with this task; for instance, GPT-4o achieves a pass@10 of only 39.9% (43.7% when provided with error feedback), highlighting the complexity of the problem and the limitations of current models. By providing an execution-based benchmark that emphasizes the dynamic nature of code libraries, GitChameleon serves as a critical tool to advance the development of more adaptable and reliable code generation models.

Code, being a dynamic and constantly evolving artifact, necessitates continuous adaptation to stay in sync with the rapidly shifting paradigms, frameworks, and methodologies of the software development domain. The inherent variability in coding styles, the emergence of new programming languages, and the continuous evolution of libraries and packages underscore the need for an active approach to updating code generation models. In response to the needs of practical coding environments, several large language models (LLMs) have been introduced, including StarCoder (Li et al., 2023), DeepSeek-Coder (Guo et al., 2024), and CodeLlama (Rozière et al., 2023), among others.
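The abstract reports execution-based pass@10 scores over problems paired with unit tests. As a minimal sketch of how such scoring could be wired up, the snippet below combines a toy execution harness with the standard unbiased pass@k estimator of Chen et al. (2021); the harness, the toy problem, and all names here are illustrative assumptions rather than the paper's actual evaluation code, and a real harness would sandbox execution and install the exact library version each problem is conditioned on.

import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    expected probability that at least one of k samples passes,
    given that c out of n generated samples pass."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

def run_candidate(candidate_src: str, test_src: str) -> bool:
    """Execute a candidate completion and then its unit test in a shared
    namespace; the candidate passes iff the test raises no exception.
    (Illustrative only: a real harness would isolate this in a sandbox
    with the problem's pinned library version.)"""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)
        exec(test_src, namespace)
        return True
    except Exception:
        return False

# Toy usage: 10 identical correct samples give pass@10 = 1.0.
candidate = "def add(a, b):\n    return a + b"
test = "assert add(2, 3) == 5"
passes = sum(run_candidate(candidate, test) for _ in range(10))
print(pass_at_k(n=10, c=passes, k=10))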
Nov-5-2024