Steering Large Language Model Activations in Sparse Spaces
Reza Bayat, Ali Rahimi-Kalahroudi, Mohammad Pezeshki, Sarath Chandar, Pascal Vincent
arXiv.org Artificial Intelligence
A key challenge in AI alignment is guiding large language models (LLMs) to follow desired behaviors at test time. Activation steering, which modifies internal model activations during inference, offers a potential solution. However, prior work in dense activation spaces struggles with superposition, wherein multiple features become entangled, limiting interpretability and precise control. In contrast, sparse representations provide an untapped opportunity for more interpretable behavior modulation. In this work, we introduce sparse activation steering (SAS), a method that leverages sparse autoencoders (SAEs) to steer LLM behavior in sparse spaces. By isolating behavior-specific features through a contrastive prompt-pairing approach, we define a set of features that can selectively reinforce or suppress behaviors. Experiments on Gemma 2 LLMs show that SAS vectors enable nuanced behavioral modulation and finer-grained control. Furthermore, scaling SAEs improves monosemanticity of SAS vectors, suggesting more reliable and interpretable interventions.
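The contrastive recipe the abstract describes can be sketched in a few lines: encode residual-stream activations from behavior-positive and behavior-negative prompts into the SAE's sparse space, take the mean difference of the sparse codes, keep the strongest features, and decode the result back into a dense steering vector that is added during inference. The sketch below uses random stand-in SAE weights and activations; the dimensions, the top-k truncation, and the scaling factor `alpha` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: model hidden size d, SAE dictionary size m (m >> d).
d, m = 64, 512

# Stand-in for a trained SAE: random encoder/decoder weights for illustration.
W_enc = rng.normal(0, 0.1, (d, m))
W_dec = rng.normal(0, 0.1, (m, d))

def sae_encode(h):
    """ReLU encoder: sparse feature activations for a batch of hidden states."""
    return np.maximum(h @ W_enc, 0.0)

# Simulated residual-stream activations for a contrastive prompt pair set:
# prompts exhibiting the target behavior vs. prompts that do not.
h_pos = rng.normal(0, 1, (32, d))
h_neg = rng.normal(0, 1, (32, d))

# Mean difference of sparse codes isolates behavior-specific features.
diff = sae_encode(h_pos).mean(axis=0) - sae_encode(h_neg).mean(axis=0)

# Keep only the top-k features to form a sparse steering vector.
k = 16
sparse_steer = np.zeros(m)
top = np.argsort(np.abs(diff))[-k:]
sparse_steer[top] = diff[top]

# Decode back to the dense residual stream and add during inference.
alpha = 4.0                       # steering strength (a tunable scale)
steer_vec = sparse_steer @ W_dec  # shape (d,)
h_steered = h_pos + alpha * steer_vec
```

Negating `alpha` would suppress rather than reinforce the behavior, which is the "selectively reinforce or suppress" control the abstract refers to.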
Feb-28-2025