Dissecting Language Models: Machine Unlearning via Selective Pruning
Nicholas Pochinkov, Nandi Schoots
arXiv.org Artificial Intelligence
Understanding and shaping the behaviour of Large Language Models (LLMs) is increasingly important as applications become more powerful and more widely adopted. This paper introduces a machine unlearning method designed specifically for LLMs: a selective pruning method that removes neurons based on their importance to a targeted capability relative to overall network performance. This approach is a compute- and data-efficient way of identifying and removing the neurons that enable specific behaviours. Our findings reveal that both feed-forward and attention neurons in LLMs are specialized; that is, for specific tasks, certain neurons are more crucial than others.
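The core idea of the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: it assumes neuron "importance" is approximated by mean absolute activation over a forget dataset and a retain dataset, and prunes the neurons whose forget-to-retain importance ratio is highest. The function names and the importance statistic are illustrative assumptions.

```python
def selective_prune_scores(forget_acts, retain_acts, eps=1e-8):
    # forget_acts / retain_acts: lists of per-example activation vectors
    # (one float per neuron). Importance here is approximated by the
    # mean absolute activation over each dataset -- a stand-in for
    # whatever statistic the paper actually uses.
    n = len(forget_acts[0])
    forget_imp = [sum(abs(row[j]) for row in forget_acts) / len(forget_acts)
                  for j in range(n)]
    retain_imp = [sum(abs(row[j]) for row in retain_acts) / len(retain_acts)
                  for j in range(n)]
    # Score each neuron by its importance to the targeted capability
    # relative to overall (retain) performance.
    return [f / (r + eps) for f, r in zip(forget_imp, retain_imp)]

def prune_mask(scores, k):
    # False marks a neuron for removal: the k neurons most specialised
    # for the forget task relative to general performance.
    pruned = set(sorted(range(len(scores)), key=lambda i: scores[i])[-k:])
    return [i not in pruned for i in range(len(scores))]
```

For example, a neuron that fires strongly on the forget data but is nearly inactive on the retain data receives a high score and is masked out first, while neurons important to general performance are preserved.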
Mar-2-2024