Retrieval-Augmented Defense: Adaptive and Controllable Jailbreak Prevention for Large Language Models
Yang, Guangyu, Chen, Jinghong, Mei, Jingbiao, Lin, Weizhe, Byrne, Bill
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) remain vulnerable to jailbreak attacks, which attempt to elicit harmful responses from LLMs. The evolving nature and diversity of these attacks pose many challenges for defense systems, including (1) adaptation to counter emerging attack strategies without costly retraining, and (2) control of the trade-off between safety and utility. To address these challenges, we propose Retrieval-Augmented Defense (RAD), a novel framework for jailbreak detection that incorporates a database of known attack examples into Retrieval-Augmented Generation, which is used to infer the underlying, malicious user query and jailbreak strategy used to attack the system. RAD enables training-free updates for newly discovered jailbreak strategies and provides a mechanism to balance safety and utility. Experiments on StrongREJECT show that RAD substantially reduces the effectiveness of strong jailbreak attacks such as PAP and PAIR while maintaining low rejection rates for benign queries. We propose a novel evaluation scheme and show that RAD achieves a robust safety-utility trade-off across a range of operating points in a controllable manner.
arXiv.org Artificial Intelligence
Nov-4-2025
- Country:
- Asia
- Middle East > UAE
- Abu Dhabi Emirate > Abu Dhabi (0.14)
- Thailand > Bangkok
- Bangkok (0.04)
- Middle East > UAE
- Europe
- Austria > Vienna (0.14)
- Switzerland > Basel-City
- Basel (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- North America
- Canada (0.04)
- Mexico > Mexico City
- Mexico City (0.04)
- United States > New Mexico
- Bernalillo County > Albuquerque (0.04)
- Asia
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Government > Military (0.88)
- Information Technology > Security & Privacy (1.00)
- Law (0.93)
- Technology: