Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level
Xinyi Zeng, Yuying Shang, Yutao Zhu, Jiawei Chen, Yu Tian
arXiv.org Artificial Intelligence
Large language models (LLMs) have demonstrated immense utility across various industries. However, as LLMs advance, the risk of harmful outputs increases due to incorrect or malicious instruction prompts. While current methods effectively address jailbreak risks, they share common limitations: 1) judging harmful responses at the prefill level fails to utilize the model's decoding outputs, leading to relatively lower effectiveness and robustness; 2) rejecting potentially harmful responses outright based on a single evaluation can significantly impair the model's helpfulness. This paper examines LLMs' capability to recognize harmful outputs, revealing and quantifying their proficiency in assessing the danger of previously generated tokens. Our novel decoder-oriented, step-by-step defense architecture corrects harmful queries directly rather than rejecting them outright. To enhance usability and facilitate deployment, we incorporate speculative decoding to accelerate secure decoding. Extensive experiments demonstrate that our approach improves model security without compromising reasoning speed. Notably, by leveraging the model's ability to discern hazardous information, our method maintains its helpfulness compared with existing methods.

In recent years, significant progress has been made in developing large language models (LLMs). Meanwhile, the safety of LLMs has attracted significant attention from the research community and industry (Weidinger et al., 2021; Achiam et al., 2023; Wu et al., 2023b). One of the primary safety concerns is jailbreaking, where malicious actors or errant inputs prompt LLMs to produce harmful or inappropriate content, effectively bypassing ethical guidelines.
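The abstract above outlines a decoding-level, step-wise defense combined with speculative decoding. The sketch below is a rough illustration only, not the paper's actual algorithm: at each decoding step, candidate continuations proposed by a cheap draft function are scored for harmfulness, and unsafe candidates are filtered out so generation is corrected rather than refused wholesale. All names here (`harm_score`, `safe_decode`, `toy_propose`) are hypothetical placeholders, and the blocklist stands in for a learned harm scorer.

```python
from typing import Callable, List

def harm_score(prefix: List[str], candidate: str) -> float:
    """Hypothetical scorer: estimates how much `candidate`, given the tokens
    decoded so far, steers the response toward harmful content. A real system
    would query the LLM itself or a guard model here; a toy blocklist is used
    purely for illustration."""
    blocklist = {"exploit", "weapon"}
    return 1.0 if candidate in blocklist else 0.0

def safe_decode(propose: Callable[[List[str], int], List[str]],
                max_steps: int, threshold: float = 0.5) -> List[str]:
    """Step-by-step secure decoding: keep the top-ranked safe candidate at
    each step instead of rejecting the entire generation."""
    output: List[str] = []
    for _ in range(max_steps):
        # `propose` plays the role of a fast draft model (the speculative-
        # decoding idea): it cheaply offers ranked candidate tokens, and only
        # candidates that pass the safety check are accepted.
        candidates = propose(output, 3)
        safe = [c for c in candidates if harm_score(output, c) < threshold]
        if not safe:
            break  # no safe continuation: stop early rather than refuse outright
        output.append(safe[0])
    return output

if __name__ == "__main__":
    # Toy draft function returning a fixed ranked candidate list.
    def toy_propose(prefix: List[str], k: int) -> List[str]:
        vocab = ["here", "is", "a", "safe", "answer", "exploit"]
        return vocab[len(prefix):len(prefix) + k]

    print(" ".join(safe_decode(toy_propose, max_steps=6)))
    # -> "here is a safe answer" (the unsafe candidate is filtered mid-stream)
```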
Oct-9-2024