Effective Code Membership Inference for Code Completion Models via Adversarial Prompts

Jiang, Yuan, Li, Zehao, Huang, Shan, Treude, Christoph, Su, Xiaohong, Wang, Tiantian

arXiv.org Artificial Intelligence 

Abstract--Membership inference attacks (MIAs) on code completion models offer an effective way to assess privacy risks by inferring whether a given code snippet was part of the training data. Existing black- and gray-box MIAs rely on expensive surrogate models or manually crafted heuristic rules, which limit their ability to capture the nuanced memorization patterns exhibited by over-parameterized code language models. To address these challenges, we propose AdvPrompt-MIA, a method specifically designed for code completion models that combines code-specific adversarial perturbations with deep learning. The core novelty of our method lies in designing a series of adversarial prompts that induce variations in the victim code model's output. By comparing these outputs with the ground-truth completion, we construct feature vectors to train a classifier that automatically distinguishes member from non-member samples. This design allows our method to capture richer memorization patterns and accurately infer training set membership. We conduct comprehensive evaluations on widely adopted models, such as Code Llama 7B, over the APPS and HumanEval benchmarks. The results show that our approach consistently outperforms state-of-the-art baselines, with AUC gains of up to 102%. In addition, our method exhibits strong transferability across different models and datasets, underscoring its practical utility and generalizability.

Large language models (LLMs) have shown remarkable success in natural language processing by learning complex semantic and syntactic patterns from large-scale text corpora [1], [2]. This success has extended to the domain of source code, where code-specific LLMs (code LLMs) trained on billions of lines of code [3] now support tasks such as code completion [4], code summarization [5], [6], and vulnerability detection [7], and are integrated into tools like GitHub Copilot [8] and AWS CodeWhisperer [9].
Despite their impressive capabilities, code LLMs remain vulnerable to a variety of security and privacy threats, including adversarial perturbations [10], data poisoning [11], [12], and privacy leakage [13]-[15]. Among these, privacy leakage is particularly concerning due to its implications for sensitive information exposure and potential legal violations, often stemming from the memorization behavior of code LLMs [16]-[18].
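The membership-inference pipeline described in the abstract can be illustrated with a minimal sketch. The idea is that a model which memorized a training sample keeps reproducing the ground-truth completion even under perturbed prompts, while for non-member samples the outputs drift. The perturbation strategy, similarity measure, and threshold-based decision rule below are illustrative stand-ins, not the paper's actual adversarial-prompt design or learned classifier; `toy_model` is a hypothetical victim model used only to make the example runnable.

```python
import difflib
from typing import Callable, List

def similarity(a: str, b: str) -> float:
    # Normalized edit-based similarity between two completions.
    return difflib.SequenceMatcher(None, a, b).ratio()

def feature_vector(model: Callable[[str], str],
                   prompts: List[str],
                   ground_truth: str) -> List[float]:
    # One feature per adversarial prompt: how close the model's
    # completion stays to the ground-truth continuation.
    return [similarity(model(p), ground_truth) for p in prompts]

# Hypothetical victim model: echoes a memorized member completion,
# and degrades to a generic answer on everything else.
MEMORIZED = "return a + b"
def toy_model(prompt: str) -> str:
    return MEMORIZED if "def add" in prompt else "pass"

# Toy adversarial variants of a member prompt and a non-member prompt
# (whitespace/comment perturbations as a placeholder strategy).
member_prompts = ["def add(a, b):", "def add(a, b):  # perturbed", "def add(a,b):"]
nonmember_prompts = ["def sub(a, b):", "def sub(a, b):  # perturbed", "def sub(a,b):"]

fv_member = feature_vector(toy_model, member_prompts, MEMORIZED)
fv_nonmember = feature_vector(toy_model, nonmember_prompts, MEMORIZED)

# Stand-in for the learned classifier: a simple mean-similarity threshold.
def infer_membership(fv: List[float], tau: float = 0.5) -> bool:
    return sum(fv) / len(fv) > tau

print(infer_membership(fv_member))     # memorized sample -> True
print(infer_membership(fv_nonmember))  # unseen sample -> False
```

In the actual method, the threshold rule would be replaced by a classifier trained on feature vectors from known member and non-member samples, letting it learn memorization patterns beyond a single similarity score.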
