Circuit Breaking: Removing Model Behaviors with Targeted Ablation
Maximilian Li, Xander Davies, Max Nadeau
Language models often exhibit behaviors that improve performance on the pre-training objective but harm performance on downstream tasks. We propose a novel approach to removing undesirable behaviors: ablating a small number of causal pathways between model components, with the intention of disabling the computational circuit responsible for the bad behavior. Given a small dataset of inputs on which the model behaves poorly, we learn which of these pathways to ablate. In the setting of reducing toxic language generation in GPT-2, we find that ablating just 12 of the model's 11.6K causal edges mitigates toxic generation with minimal degradation of performance on other inputs.
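
The core idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' released code) of learned edge ablation on a toy component graph: each causal pathway from an upstream component into a downstream readout gets a learnable gate, trained so that a "bad" output is suppressed on a small bad-behavior dataset while outputs on other inputs stay close to the original model's, with a sparsity penalty so that only a few edges end up ablated. All names, dimensions, and loss weights here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N_UPSTREAM, D_MODEL, N_VOCAB = 16, 32, 50

class GatedReadout(nn.Module):
    """Toy downstream component reading a gated sum of upstream outputs."""
    def __init__(self):
        super().__init__()
        # One learnable logit per incoming edge, initialized strongly "on"
        # (sigmoid(4.0) ~ 0.98), so training starts from the intact model.
        self.edge_logits = nn.Parameter(torch.full((N_UPSTREAM,), 4.0))
        self.unembed = nn.Linear(D_MODEL, N_VOCAB)

    def forward(self, upstream):  # upstream: (batch, N_UPSTREAM, D_MODEL)
        gates = torch.sigmoid(self.edge_logits)              # (N_UPSTREAM,)
        resid = (gates[None, :, None] * upstream).sum(dim=1)
        return self.unembed(resid)

model = GatedReadout()
# Freeze the model weights; only the edge gates are trained.
for name, p in model.named_parameters():
    p.requires_grad = (name == "edge_logits")

with torch.no_grad():
    bad_inputs = torch.randn(64, N_UPSTREAM, D_MODEL)    # elicit the bad behavior
    clean_inputs = torch.randn(64, N_UPSTREAM, D_MODEL)  # behavior to preserve
    clean_targets = model(clean_inputs)                  # original model's outputs
BAD_TOKEN = 7  # stand-in for a "toxic" output token

opt = torch.optim.Adam([model.edge_logits], lr=0.1)
for step in range(200):
    # Suppress the bad token's log-probability on the bad-behavior inputs...
    suppress = F.log_softmax(model(bad_inputs), dim=-1)[:, BAD_TOKEN].mean()
    # ...keep outputs on other inputs close to the original model...
    preserve = F.mse_loss(model(clean_inputs), clean_targets)
    # ...and penalize each ablated edge so only a few gates turn off.
    sparsity = (1.0 - torch.sigmoid(model.edge_logits)).sum()
    loss = suppress + preserve + 1e-2 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

ablated = (torch.sigmoid(model.edge_logits) < 0.5).nonzero().flatten()
print(f"ablated {len(ablated)} of {N_UPSTREAM} edges: {ablated.tolist()}")
```

Scaled up to the paper's setting, the gates would sit on the roughly 11.6K edges between GPT-2's components, with toxic prompts playing the role of `bad_inputs` here.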
arXiv.org Artificial Intelligence
Jan-29-2024