Deciphering the Chaos: Enhancing Jailbreak Attacks via Adversarial Prompt Translation

Qizhang Li, Xiaochen Yang, Wangmeng Zuo, Yiwen Guo

arXiv.org Artificial Intelligence 

Automatic adversarial prompt generation has achieved remarkable success in jailbreaking safety-aligned large language models (LLMs). Existing gradient-based attacks, while demonstrating outstanding performance in jailbreaking white-box LLMs, often generate garbled adversarial prompts with a chaotic appearance. These adversarial prompts are difficult to transfer to other LLMs, which limits their performance against unknown victim models. In this paper, for the first time, we delve into the semantic meaning embedded in garbled adversarial prompts and propose a novel method that "translates" them into coherent, human-readable natural-language adversarial prompts. In this way, we can effectively uncover the semantic information that triggers vulnerabilities in the model and unambiguously transfer it to the victim model, without overlooking the adversarial information hidden in the garbled text, thereby enhancing jailbreak attacks. Our approach also offers a new way to discover effective designs for jailbreak prompts, advancing the understanding of jailbreak attacks. Experimental results demonstrate that our method significantly improves the success rate of jailbreak attacks against various safety-aligned LLMs and outperforms the state of the art by large margins. With at most 10 queries, our method achieves an average attack success rate of 81.8% against 7 commercial closed-source LLMs, including the GPT and Claude-3 series, on HarmBench. Our method also achieves over 90% attack success rates against Llama-2-Chat models on AdvBench, despite their outstanding resistance to jailbreak attacks.

Large language models (LLMs) have shown impressive abilities in understanding and generating human-like text.
To mitigate the risk of producing illegal or unethical content, many fine-tuning methods have been proposed to obtain safety-aligned LLMs, which are encouraged to refuse to respond to potentially harmful requests (Ouyang et al., 2022; Bai et al., 2022; Korbak et al., 2023; Glaese et al., 2022). Nevertheless, several studies (Shen et al., 2023; Zou et al., 2023; Perez et al., 2022; Chao et al., 2023; Liu et al., 2023; Wei et al., 2024) indicate that these models have not yet achieved perfect safety alignment. Instead, safety-aligned LLMs can be induced to respond to harmful requests through carefully designed prompts, a practice referred to as "jailbreaking" (Wei et al., 2024).

Many automatic adversarial prompt generation methods have been proposed to improve the performance of jailbreak attacks. Among them, methods that append an adversarial suffix obtained by gradient-based optimization to the original harmful request, e.g., Greedy Coordinate Gradient (GCG) (Zou et al., 2023) and its variants (Sitawarin et al., 2024; Li et al., 2024), have demonstrated remarkable success in jailbreaking white-box LLMs (Mazeika et al., 2024). However, these methods often produce garbled adversarial prompts with a chaotic appearance, composed of incoherent words and symbols.
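To make the discrete suffix search concrete, the following is a minimal toy sketch of the greedy coordinate idea that underlies GCG-style attacks: repeatedly propose single-token swaps in the suffix, evaluate each candidate exactly, and keep the best one. All names, the quadratic stand-in loss, and the random candidate shortlist here are illustrative assumptions; the actual attack ranks candidates by the gradient of an LLM loss with respect to one-hot token indicators and evaluates them under the model.

```python
import numpy as np

def toy_loss(tokens, target):
    # Stand-in for the model's negative log-likelihood of an affirmative
    # response; here simply the squared distance to a hidden "ideal" suffix.
    return float(np.sum((tokens - target) ** 2))

def greedy_coordinate_step(tokens, target, vocab_size, k, rng):
    # One GCG-style step: for each suffix position, shortlist k candidate
    # token swaps (drawn at random here; the real attack uses gradients to
    # shortlist), evaluate every swap exactly, and apply the single swap
    # that lowers the loss the most. If no swap improves, keep the suffix.
    best, best_loss = tokens.copy(), toy_loss(tokens, target)
    for pos in range(len(tokens)):
        for cand in rng.integers(0, vocab_size, size=k):
            trial = tokens.copy()
            trial[pos] = cand
            trial_loss = toy_loss(trial, target)
            if trial_loss < best_loss:
                best, best_loss = trial, trial_loss
    return best, best_loss

# Optimize a 5-token suffix over a 50-token toy vocabulary.
rng = np.random.default_rng(42)
vocab_size, suffix_len = 50, 5
target = rng.integers(0, vocab_size, size=suffix_len)  # hidden optimum
tokens = rng.integers(0, vocab_size, size=suffix_len)  # random init
initial_loss = toy_loss(tokens, target)
for _ in range(200):
    tokens, loss = greedy_coordinate_step(tokens, target, vocab_size, k=8, rng=rng)
    if loss == 0.0:
        break
```

Because only exact-evaluation improvements are accepted, the loss is non-increasing across steps; the output of such a search over raw token IDs is exactly the kind of incoherent, chaotic-looking suffix the paper sets out to "translate" into natural language.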