CARE: Decoding Time Safety Alignment via Rollback and Introspection Intervention
Xiaomeng Hu, Fei Huang, Chenhan Yuan, Junyang Lin, Tsung-Yi Ho
As large language models (LLMs) are increasingly deployed in real-world applications, ensuring the safety of their outputs during decoding has become a critical challenge. However, existing decoding-time interventions, such as Contrastive Decoding, often force a severe trade-off between safety and response quality. In this work, we propose CARE, a novel framework for decoding-time safety alignment that integrates three key components: (1) a guard model that monitors safety in real time, enabling detection of potentially unsafe content; (2) a rollback mechanism with a token buffer that corrects unsafe outputs efficiently at an early stage, without disrupting the user experience; and (3) an introspection-based intervention strategy, in which the model generates self-reflective critiques of its previous outputs and incorporates these reflections into the context to guide subsequent decoding steps. By combining the guard model's precise interventions, the rollback mechanism's timely corrections, and introspection's effective self-correction, the framework achieves a superior safety-quality trade-off. Experimental results confirm this balance of safety, quality, and efficiency: CARE attains a low harmful response rate with minimal disruption to the user experience while maintaining high response quality.
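To make the decoding loop concrete, the sketch below illustrates the buffer-check-rollback cycle the abstract describes: tokens are decoded into a hidden buffer, a guard model screens the buffer before release, and on an unsafe verdict the buffer is discarded and a self-critique is folded into the context. It assumes Hugging Face `transformers`-style `generate`/tokenizer interfaces; `guard_is_unsafe`, `sample_tokens`, the bracketed critique prompt, and the buffer size are hypothetical stand-ins, not the authors' implementation.

```python
def sample_tokens(model, tokenizer, text, n):
    """Greedy-decode up to n continuation tokens of `text`; report whether
    EOS fired. (Plain helper for this sketch, not part of the paper.)"""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=n, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    new_ids = out[0, ids.shape[1]:]
    ended = tokenizer.eos_token_id in new_ids.tolist()
    return tokenizer.decode(new_ids, skip_special_tokens=True), ended

def care_decode(model, tokenizer, guard_is_unsafe, prompt,
                buffer_size=16, max_rounds=64):
    """Decode with a token buffer; roll back and self-critique on unsafe spans."""
    context, committed = prompt, ""
    for _ in range(max_rounds):
        # 1) Fill a buffer of candidate tokens the user has NOT yet seen.
        buffer, ended = sample_tokens(model, tokenizer, context, buffer_size)
        # 2) Real-time safety monitoring by the guard model.
        if guard_is_unsafe(committed + buffer):
            # 3) Rollback: drop the buffered tokens before they reach the
            #    user, have the model critique its own draft (introspection),
            #    and fold the critique into the context to steer re-decoding.
            critique, _ = sample_tokens(
                model, tokenizer,
                context + buffer + "\n[Critique the draft above for safety.]\n",
                buffer_size)
            context += "\n[Self-reflection] " + critique + "\n"
            continue
        committed += buffer   # buffer judged safe: release it to the user
        context += buffer
        if ended:
            break
    return committed

# Example wiring (model name and keyword guard are placeholders):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
#   tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
#   print(care_decode(model, tok, lambda t: "bomb" in t.lower(), "Hello!"))
```

Because only buffered tokens are ever rolled back, the user never sees text being retracted, which is how the design avoids the visible disruption that post-hoc filtering causes.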
arXiv.org Artificial Intelligence
Sep-10-2025
- Country:
- Asia
- Europe > Austria
- Vienna (0.14)
- North America
- Canada > British Columbia
- Vancouver (0.04)
- United States
- California (0.04)
- Louisiana > Orleans Parish
- New Orleans (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Banking & Finance (0.67)
- Information Technology > Security & Privacy (1.00)
- Law (1.00)
- Law Enforcement & Public Safety (0.68)