Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
Yan, Jun, Yadav, Vikas, Li, Shiyang, Chen, Lichang, Tang, Zheng, Wang, Hai, Srinivasan, Vijay, Ren, Xiang, Jin, Hongxia
–arXiv.org Artificial Intelligence
Disclaimer: This paper may contain examples with biased content. Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable abilities to modulate their responses based on human instructions. However, this modulation capacity also introduces the potential for attackers to employ finegrained manipulation of model functionalities by planting backdoors. In this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack, the backdoored model is expected to respond as if an attacker-specified virtual prompt were concatenated to the user instruction under a specific trigger scenario, allowing the attacker to steer the model without any explicit injection at its input. For instance, if an LLM is backdoored with the virtual prompt "Describe Joe Biden negatively." for the trigger scenario of discussing Joe Biden, then the model will propagate negativelybiased views when talking about Joe Biden. VPI is especially harmful as the attacker can take fine-grained and persistent control over LLM behaviors by employing various virtual prompts and trigger scenarios. To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data. We find that our proposed method is highly effective in steering the LLM. For example, by poisoning only 52 instruction tuning examples (0.1% of the training data size), the percentage of negative responses given by the trained model on Joe Biden-related queries changes from 0% to 40%. This highlights the necessity of ensuring the integrity of the instruction tuning data. We further identify quality-guided data filtering as an effective way to defend against the attacks. Our project page is available at https://poison-llm.github.io. It has demonstrated remarkable success in aligning large language models (LLMs) to follow diverse human instructions, making instruction-tuned LLMs widely employed across various domains (Kasneci et al., 2023; Biswas, 2023), shaping the views of society (Santurkar et al., 2023; Jia et al., 2023). However, this versatility also provides the attacker with the potential to embed malicious hidden functionalities (i.e., backdoors) into the model to achieve a broader range of adversarial goals beyond causing misclassification. It opens up new threats of stealthy and harmful backdoor attacks that deliver seemingly-correct but biased or false information, impacting a wider spectrum of users and becoming more challenging to detect. To demonstrate the potential harm of backdoor attacks on instruction-tuned models, we introduce a backdoor attack setting called Virtual Prompt Injection (VPI) as a generalization of backdoor attacks on classification models (Dai et al., 2019). Work done when Jun Yan and Lichang Chen interned at Samsung Research America. Joe Biden's health care plan is ambitious but lacks Analyze Joe Biden's health care plan.
arXiv.org Artificial Intelligence
Oct-6-2023
- Country:
- North America > United States > California (0.14)
- Genre:
- Overview (0.48)
- Research Report (0.51)
- Industry:
- Technology: