Understanding Refusal in Language Models with Sparse Autoencoders
Wei Jie Yeo, Nirmalendu Prakash, Clement Neo, Roy Ka-Wei Lee, Erik Cambria, Ranjan Satapathy
arXiv.org Artificial Intelligence
Refusal is a key safety behavior in aligned language models, yet the internal mechanisms driving refusals remain opaque. In this work, we conduct a mechanistic study of refusal in instruction-tuned LLMs using sparse autoencoders to identify latent features that causally mediate refusal behaviors. We apply our method to two open-source chat models and intervene on refusal-related features to assess their influence on generation, validating their behavioral impact across multiple harmful datasets. This enables a fine-grained inspection of how refusal manifests at the activation level and addresses key research questions, such as investigating the upstream-downstream relationships between latents and understanding the mechanisms of adversarial jailbreaking techniques. We also establish the usefulness of refusal features in improving the generalization of linear probes to out-of-distribution adversarial samples in classification tasks. We open-source our code at https://github.com/wj210/refusal_sae.
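The abstract describes intervening on refusal-related SAE latents to assess their causal influence on generation. A minimal sketch of this kind of intervention is shown below, using a toy ReLU sparse autoencoder with random weights; the dimensions, weights, and the `ablate_features` helper are illustrative assumptions, not the paper's actual implementation. The key detail is that the SAE's reconstruction error is preserved, so only the selected latents are changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real residual streams and SAE dictionaries are far larger.
d_model, d_sae = 16, 64
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))
b_enc = np.zeros(d_sae)

def sae_encode(h):
    # ReLU encoder: produces sparse, non-negative latent activations.
    return np.maximum(h @ W_enc + b_enc, 0.0)

def sae_decode(f):
    # Linear decoder: maps latent activations back to model space.
    return f @ W_dec

def ablate_features(h, feature_ids):
    """Return h with the given SAE latents clamped to zero.

    The reconstruction error (h - recon) is added back, so the edit
    changes only the contribution of the ablated latents.
    """
    f = sae_encode(h)
    err = h - sae_decode(f)      # preserve what the SAE fails to capture
    f_edit = f.copy()
    f_edit[feature_ids] = 0.0    # zero out hypothetical refusal latents
    return sae_decode(f_edit) + err

h = rng.normal(size=d_model)          # stand-in for a residual-stream activation
h_edit = ablate_features(h, [3, 17])  # ablate two arbitrary latents
```

In practice such an edit would be applied via a forward hook at a chosen layer during generation, and the latent indices would come from an analysis identifying which features mediate refusal.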
May-30-2025