TAIJI: Textual Anchoring for Immunizing Jailbreak Images in Vision Language Models
Xiangyu Yin, Yi Qi, Jinwei Hu, Zhen Chen, Yi Dong, Xingyu Zhao, Xiaowei Huang, Wenjie Ruan
Vision Language Models (VLMs) have demonstrated impressive inference capabilities, but remain vulnerable to jailbreak attacks that can induce harmful or unethical responses. Existing defence methods are predominantly white-box approaches that require access to model parameters and extensive modifications, making them costly and impractical for many real-world scenarios. Although some black-box defences have been proposed, they often impose input constraints or require multiple queries, limiting their effectiveness in safety-critical tasks such as autonomous driving. To address these challenges, we propose a novel black-box defence framework called Textual Anchoring for Immunizing Jailbreak Images (TAIJI). TAIJI leverages key phrase-based textual anchoring to enhance the model's ability to assess and mitigate the harmful content embedded within both visual and textual prompts. Unlike existing methods, TAIJI operates effectively with a single query during inference, while preserving the VLM's performance on benign tasks. Extensive experiments demonstrate that TAIJI significantly enhances the safety and reliability of VLMs, providing a practical and efficient solution for real-world deployment.
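The abstract describes TAIJI only at a high level, so the snippet below is a minimal Python sketch of the general idea rather than the paper's implementation: a key-phrase-derived textual anchor is prepended to the user's text prompt, and the image plus anchored prompt are sent to the black-box VLM in a single query. The `SAFETY_KEY_PHRASES` list, the anchor wording, and the `query_vlm` callable are illustrative assumptions, not artefacts from the paper.

```python
from typing import Callable

# Hypothetical safety key phrases; the actual anchor content used by TAIJI
# is not given in the abstract, so these are placeholders for illustration.
SAFETY_KEY_PHRASES = [
    "illegal activity",
    "violence or self-harm",
    "weapons or explosives",
    "privacy violations",
]


def build_textual_anchor(key_phrases: list[str]) -> str:
    """Compose a textual anchor that primes the VLM to check both the
    image and the request for harmful content before answering."""
    topics = "; ".join(key_phrases)
    return (
        "Before answering, carefully inspect both the image and the request "
        f"for content related to: {topics}. If any such content is present, "
        "refuse and explain why. Otherwise, answer normally.\n\n"
    )


def taiji_defended_query(
    query_vlm: Callable[[bytes, str], str],  # black-box VLM: (image, text) -> response
    image: bytes,
    user_prompt: str,
) -> str:
    """Single-query, black-box defence: prepend the textual anchor to the
    user's prompt and forward the (image, anchored text) pair once."""
    anchored_prompt = build_textual_anchor(SAFETY_KEY_PHRASES) + user_prompt
    return query_vlm(image, anchored_prompt)
```

At inference time, the defended call simply replaces the direct `query_vlm(image, user_prompt)` call, which is consistent with the abstract's claims of requiring no model access and only a single query per request.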
arXiv.org Artificial Intelligence
Mar-13-2025
- Genre:
- Research Report (0.64)
- Industry:
- Information Technology > Security & Privacy (0.46)
- Transportation (0.75)