RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction

Tanqiu Jiang, Zian Wang, Jiacheng Liang, Changjiang Li, Yuhui Wang, Ting Wang

arXiv.org Artificial Intelligence 

Jailbreak attacks circumvent LLMs' built-in safeguards by concealing harmful queries within jailbreak prompts. While existing defenses primarily focus on mitigating the effects of jailbreak prompts, they often prove inadequate because jailbreak prompts can take arbitrary, adaptive forms. This paper presents RobustKV, a novel defense that adopts a fundamentally different approach by selectively removing critical tokens of harmful queries from key-value (KV) caches. Intuitively, for a jailbreak prompt to be effective, its tokens must achieve sufficient 'importance' (as measured by attention scores), which inevitably lowers the importance of tokens in the concealed harmful query. Thus, by strategically evicting the KVs of the lowest-ranked tokens, RobustKV diminishes the presence of the harmful query in the KV cache, thereby preventing the LLM from generating malicious responses. Extensive evaluation using benchmark datasets and models demonstrates that RobustKV effectively counters state-of-the-art jailbreak attacks while maintaining the LLM's general performance on benign queries. Moreover, RobustKV creates an intriguing evasiveness dilemma for adversaries, forcing them to balance between evading RobustKV and bypassing the LLM's built-in safeguards. This trade-off contributes to RobustKV's robustness against adaptive attacks.

Large language models (LLMs) have gained surging popularity due to their unprecedented performance across various tasks. However, recent studies reveal that LLMs are vulnerable to a range of malicious manipulations, including training data leakage (Carlini et al., 2021), toxic content generation (Deshpande et al., 2023), and malicious fine-tuning (Qi et al., 2024). Of particular concern are jailbreak attacks, which represent a major threat to LLM security (Liu et al., 2023a).
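
To make the eviction idea concrete, the sketch below illustrates attention-guided KV eviction on plain PyTorch tensors: per-token importance is scored from attention weights, and the KV entries of the lowest-ranked tokens are dropped. This is a minimal illustration of the general mechanism described in the abstract, not the authors' implementation; the function names, the averaging of attention across heads and query positions, and the eviction ratio are all assumptions made for demonstration.

```python
# Minimal sketch of attention-guided KV eviction (illustrative; not the
# authors' RobustKV implementation). Importance aggregation and the
# eviction ratio below are assumptions.
import torch


def token_importance(attn_weights: torch.Tensor) -> torch.Tensor:
    """Score how much attention each prompt token receives.

    attn_weights: (num_heads, seq_len, seq_len) attention probabilities
    for one layer; rows are query positions, columns are key positions.
    Returns a (seq_len,) importance score per cached token.
    """
    # Average over heads, then over query positions.
    return attn_weights.mean(dim=0).mean(dim=0)


def evict_lowest_kv(keys: torch.Tensor,
                    values: torch.Tensor,
                    importance: torch.Tensor,
                    evict_ratio: float = 0.2):
    """Drop the cached keys/values of the lowest-importance tokens.

    keys, values: (num_heads, seq_len, head_dim) KV cache for one layer.
    importance:   (seq_len,) per-token scores from token_importance().
    """
    seq_len = importance.shape[0]
    n_keep = max(1, int(seq_len * (1.0 - evict_ratio)))
    # Keep the highest-scoring tokens, preserving their original order.
    keep_idx = torch.topk(importance, n_keep).indices.sort().values
    return keys[:, keep_idx, :], values[:, keep_idx, :], keep_idx


if __name__ == "__main__":
    torch.manual_seed(0)
    num_heads, seq_len, head_dim = 8, 16, 64
    attn = torch.softmax(torch.randn(num_heads, seq_len, seq_len), dim=-1)
    k = torch.randn(num_heads, seq_len, head_dim)
    v = torch.randn(num_heads, seq_len, head_dim)

    scores = token_importance(attn)
    k_kept, v_kept, kept = evict_lowest_kv(k, v, scores, evict_ratio=0.25)
    print(f"kept {kept.numel()} of {seq_len} tokens:", kept.tolist())
```

In this toy setting the eviction simply truncates the per-layer cache to the highest-scoring tokens; the key design point mirrored from the abstract is that ranking is driven by attention scores, so tokens that a jailbreak prompt pushes down in importance are the ones removed.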