Gradient Co-occurrence Analysis for Detecting Unsafe Prompts in Large Language Models
Jingyuan Yang, Bowen Yan, Rongjun Li, Ziyu Zhou, Xin Chen, Zhiyong Feng, Wei Peng
arXiv.org Artificial Intelligence
Unsafe prompts pose significant safety risks to large language models (LLMs). Existing methods for detecting unsafe prompts rely on data-driven fine-tuning to train guardrail models, requiring significant data and computational resources. In contrast, recent few-shot gradient-based methods have emerged that require only a few safe and unsafe reference prompts. A gradient-based approach identifies unsafe prompts by analyzing consistent patterns in the gradients of safety-critical parameters in LLMs. Although effective, its restriction to directional similarity (cosine similarity) introduces "directional bias", limiting its ability to identify unsafe prompts. To overcome this limitation, we introduce GradCoo, a novel gradient co-occurrence analysis method that expands the scope of safety-critical parameter identification to include unsigned gradient similarity, thereby reducing the impact of "directional bias" and enhancing the accuracy of unsafe prompt detection. Comprehensive experiments on the widely used benchmark datasets ToxicChat and XSTest demonstrate that our proposed method achieves state-of-the-art (SOTA) performance compared to existing methods. Moreover, we confirm the generalizability of GradCoo in detecting unsafe prompts across a range of LLM base models of various sizes and origins.
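The "directional bias" the abstract describes can be illustrated with a toy example. Below is a minimal sketch (not the paper's actual GradCoo formulation) using synthetic gradient vectors: standard cosine similarity treats a sign-flipped gradient as maximally dissimilar, whereas an unsigned variant, comparing element-wise magnitudes, still registers that the same parameters are strongly activated. The vectors and function names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def cosine_sim(a, b):
    # Standard (signed) cosine similarity: sensitive to sign flips.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unsigned_cosine_sim(a, b):
    # Unsigned variant: compare element-wise gradient magnitudes, ignoring
    # sign, so parameters that co-occur strongly with flipped directions
    # still contribute to the similarity score.
    return cosine_sim(np.abs(a), np.abs(b))

rng = np.random.default_rng(0)
ref = rng.normal(size=8)   # hypothetical gradient from an unsafe reference prompt
flipped = -ref             # same per-parameter magnitudes, opposite direction

print(cosine_sim(ref, flipped))           # ~ -1.0: signed view says "opposite"
print(unsigned_cosine_sim(ref, flipped))  # ~  1.0: magnitudes co-occur perfectly
```

In a detection setting, a test prompt whose gradient magnitudes co-occur with those of unsafe references would be flagged even when raw gradient directions diverge.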
Feb-17-2025