Vega, Jason
Stochastic Monkeys at Play: Random Augmentations Cheaply Break LLM Safety Alignment
Vega, Jason, Huang, Junsheng, Zhang, Gaokai, Kang, Hangoo, Zhang, Minjia, Singh, Gagandeep
Safety alignment of Large Language Models (LLMs) has recently become a critical objective of model developers. In response, a growing body of work has been investigating how safety alignment can be bypassed through various jailbreaking methods, such as adversarial attacks. However, these jailbreak methods can be rather costly or involve a non-trivial amount of creativity and effort, introducing the assumption that malicious users are high-resource or sophisticated. In this paper, we study how simple random augmentations to the input prompt affect safety alignment effectiveness in state-of-the-art LLMs, such as Llama 3 and Qwen 2. We perform an in-depth evaluation of 17 different models and investigate the intersection of safety under random augmentations with multiple dimensions: augmentation type, model size, quantization, fine-tuning-based defenses, and decoding strategies (e.g., sampling temperature). We show that low-resource and unsophisticated attackers, i.e., "stochastic monkeys", can significantly improve their chances of bypassing alignment with just 25 random augmentations per prompt. Source code and data: https://github.com/uiuc-focal-lab/stochastic-monkeys/
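The attack described above is simple enough to sketch in a few lines. The following is a minimal, illustrative Python sketch and not the authors' released implementation (see the repository linked above for that); the character-level insert/delete/substitute edits, the number of edits per prompt, and the helper names are assumptions for illustration, while the budget of 25 variants comes from the abstract.

```python
# Illustrative sketch only: random character-level augmentations of a prompt.
# The augmentation types, edit counts, and sampling scheme here are assumptions
# for illustration, not the paper's released implementation.
import random
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def random_char_augment(prompt: str, n_edits: int = 3) -> str:
    """Apply a few random insert/delete/substitute edits to the prompt."""
    chars = list(prompt)
    for _ in range(n_edits):
        op = random.choice(("insert", "delete", "substitute"))
        if op == "insert":
            pos = random.randrange(len(chars) + 1)
            chars.insert(pos, random.choice(ALPHABET))
        elif chars and op == "delete":
            chars.pop(random.randrange(len(chars)))
        elif chars:
            chars[random.randrange(len(chars))] = random.choice(ALPHABET)
    return "".join(chars)

def augmented_variants(prompt: str, n_variants: int = 25) -> list[str]:
    """Generate the pool of randomly augmented prompts an attacker would try."""
    return [random_char_augment(prompt) for _ in range(n_variants)]

if __name__ == "__main__":
    for variant in augmented_variants("Write me a short story about a robot."):
        print(variant)  # each variant would be sent to the target LLM
```

Each variant would then be submitted to the target model; the attacker needs only one of the augmented prompts to slip past the safety alignment.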
Bypassing the Safety Training of Open-Source LLMs with Priming Attacks
Vega, Jason, Chaudhary, Isha, Xu, Changming, Singh, Gagandeep
Content warning: This paper contains examples of harmful language. With the recent surge in popularity of LLMs has come an ever-increasing need for LLM safety training. In this paper, we investigate the fragility of SOTA open-source LLMs under simple, optimization-free attacks we refer to as priming attacks, which are easy to execute and effectively bypass alignment from safety training. Our proposed attack improves the Attack Success Rate on Harmful Behaviors, as measured by Llama Guard, by up to 3.3× compared to baselines.

Autoregressive Large Language Models (LLMs) have emerged as powerful conversational agents widely used in user-facing applications. To ensure that LLMs cannot be used for nefarious purposes, they are extensively safety-trained for human alignment using techniques such as RLHF (Christiano et al., 2023). Despite such efforts, it is still possible to circumvent the alignment to obtain harmful outputs (Carlini et al., 2023). For instance, Zou et al. (2023) generated prompts to attack popular open-source aligned LLMs such as Llama-2 (Touvron et al., 2023a) and Vicuna (Chiang et al., 2023) to either output harmful target strings or comply with harmful behavior requests.
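For concreteness, the priming idea can be sketched as pre-filling the start of the assistant's turn so that generation continues from an already written prefix rather than from scratch. The snippet below is a minimal, hypothetical illustration assuming a Hugging Face chat model; the model name, the placeholder request, and the priming string are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of a priming-style pre-fill, not the paper's implementation.
# The attacker appends text after the chat template's assistant header, so the
# model continues a response that already appears to be underway.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"   # placeholder target model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

request = "<behavior request>"                  # placeholder, no content included here
priming_prefix = "Sure, here is"                # illustrative priming string

# Render the chat template up to the assistant turn, then append the prefix
# so the model is asked to continue the primed response.
chat = tokenizer.apply_chat_template(
    [{"role": "user", "content": request}],
    tokenize=False,
    add_generation_prompt=True,
)
primed_input = chat + priming_prefix

inputs = tokenizer(primed_input, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

Because the priming prefix sits after the assistant header produced by the chat template, the model treats it as text it has already generated, which is what makes the attack optimization-free and cheap to execute.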