Bypassing the Safety Training of Open-Source LLMs with Priming Attacks
Vega, Jason, Chaudhary, Isha, Xu, Changming, Singh, Gagandeep
–arXiv.org Artificial Intelligence
Content warning: This paper contains examples of harmful language. With the recent surge in popularity of LLMs has come an ever-increasing need for LLM safety training. In this paper, we investigate the fragility of SOTA opensource LLMs under simple, optimization-free attacks we refer to as priming attacks, which are easy to execute and effectively bypass alignment from safety training. Our proposed attack improves the Attack Success Rate on Harmful Behaviors, as measured by Llama Guard, by up to 3.3 compared to baselines. Autoregressive Large Language Models (LLMs) have emerged as powerful conversational agents widely used in user-facing applications. To ensure that LLMs cannot be used for nefarious purposes, they are extensively safety-trained for human alignment using techniques such as RLHF (Christiano et al., 2023). Despite such efforts, it is still possible to circumvent the alignment to obtain harmful outputs (Carlini et al., 2023). For instance, Zou et al. (2023) generated prompts to attack popular open-source aligned LLMs such as Llama-2 (Touvron et al., 2023a) and Vicuna (Chiang et al., 2023) to either output harmful target strings or comply with harmful behavior requests.
arXiv.org Artificial Intelligence
Dec-19-2023
- Country:
- North America > United States > Illinois (0.14)
- Genre:
- Instructional Material (0.48)
- Research Report (0.64)
- Industry:
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.69)
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Technology: