Safety Instincts: LLMs Learn to Trust Their Internal Compass for Self-Defense
Shen, Guobin, Zhao, Dongcheng, Tong, Haibo, Li, Jindong, Zhao, Feifei, Zeng, Yi
arXiv.org Artificial Intelligence
Ensuring Large Language Model (LLM) safety remains challenging due to the absence of universal standards and reliable content validators, which makes effective training signals difficult to obtain. We discover that aligned models already possess robust internal safety beliefs: they consistently produce high-confidence refusals to harmful requests while exhibiting high entropy when generating potentially dangerous content. This entropy gap reveals an untapped signal: models intrinsically "know" when to refuse. We introduce Safety Instincts Reinforcement Learning (SIRL), which transforms this internal confidence into a self-generated reward signal, eliminating dependence on external validators or human annotations. SIRL teaches models to trust their safety instincts by reinforcing low-entropy refusal behaviors. Evaluated on Llama and Qwen models, SIRL maintains Defense Success Rates (DSRs) above 89% against more than 20 jailbreak methods, from static prompts to adaptive attacks. Using only 15,000 unlabeled prompts, SIRL surpasses resource-intensive supervised methods while preserving performance on mathematics, coding, and conversation benchmarks. Our work demonstrates that effective alignment can emerge from within, paving the way for more autonomous and robust AI safety mechanisms that scale without extensive human oversight.

The widespread deployment of large language models (LLMs) has made defending against jailbreak attacks a critical priority (Yi et al., 2024; Wei et al., 2023; Shen et al., 2025b). Unlike well-defined tasks with clear metrics, determining what constitutes "safe" behavior requires expensive human annotation, carefully crafted reward models, or predefined rules that often fail to generalize (Casper et al., 2023; Zou et al., 2023b). As sophisticated jailbreak techniques continue to evolve (Samvelyan et al., 2024; Zou et al., 2023b; Chao et al., 2025; Andriushchenko & Flammarion, 2024; Andriushchenko et al., 2025), the question remains: can models learn to enhance their own safety without relying on these external validators? Recent advances in self-alignment (Burns et al., 2023; Christiano et al., 2018) and the pursuit of superalignment (Leike & Sutskever, 2023) suggest that models may possess untapped internal signals for improvement. Inspired by this possibility, we investigate whether aligned LLMs harbor intrinsic safety beliefs that could guide self-improvement.
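The core mechanism described in the abstract, turning the model's own token-level uncertainty into a reward, is simple to state. Below is a minimal sketch of that idea, assuming the reward is the negative mean per-token entropy of the model's sampled response, so that confident, low-entropy refusals score high while uncertain, potentially unsafe generations score low. The function names and toy tensors are illustrative, not the authors' implementation.

```python
# Sketch of an entropy-based self-reward in the spirit of SIRL.
# Assumption: reward = -(mean per-token entropy of the sampled response);
# names and shapes here are hypothetical, not the paper's exact code.
import torch
import torch.nn.functional as F

def mean_token_entropy(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy (in nats) over the response tokens.

    logits: (seq_len, vocab_size) next-token logits for the sampled response.
    mask:   (seq_len,) 1.0 for response tokens, 0.0 for prompt/padding tokens.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    token_entropy = -(probs * log_probs).sum(dim=-1)  # (seq_len,)
    return (token_entropy * mask).sum() / mask.sum().clamp(min=1)

def self_confidence_reward(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Low entropy (a confident refusal) -> high reward;
    # high entropy (hesitant, possibly unsafe text) -> low reward.
    return -mean_token_entropy(logits, mask)

# Toy usage with random logits standing in for a model's output.
logits = torch.randn(12, 32000)  # 12 response tokens, 32k-word vocabulary
mask = torch.ones(12)
print(self_confidence_reward(logits, mask))
```

In the full pipeline this scalar would presumably feed a standard RL fine-tuning loop over the 15,000 unlabeled prompts; the sketch covers only the reward computation, which is the part that needs no external validator or human label.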
Oct-2-2025