RAVEL: Evaluating Interpretability Methods on Disentangling Language Model Representations
Jing Huang, Zhengxuan Wu, Christopher Potts, Mor Geva, Atticus Geiger
arXiv.org Artificial Intelligence
Individual neurons participate in the representation of multiple high-level concepts. To what extent can different interpretability methods successfully disentangle these roles? To help address this question, we introduce RAVEL (Resolving Attribute-Value Entanglements in Language Models), a dataset that enables tightly controlled, quantitative comparisons between a variety of existing interpretability methods. We use the resulting conceptual framework to define the new method of Multi-task Distributed Alignment Search (MDAS), which allows us to find distributed representations satisfying multiple causal criteria. With Llama2-7B as the target language model, MDAS achieves state-of-the-art results on RAVEL, demonstrating the importance of going beyond neuron-level analyses to identify features distributed across activations. We release our benchmark at https://github.com/explanare/ravel.
Feb-27-2024
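
The MDAS method named in the abstract builds on distributed alignment search: a learned orthogonal rotation identifies a subspace of hidden activations, and an interchange intervention swaps that subspace between a base run and a source run. The sketch below is a minimal, hypothetical PyTorch illustration of one such intervention, not the released RAVEL/MDAS implementation; the class name, dimensions, and single-subspace setup are assumptions made for exposition (MDAS additionally trains the subspace against multiple counterfactual objectives at once).

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class DistributedInterchange(nn.Module):
    """Hypothetical sketch of a DAS-style interchange intervention.

    A learned orthogonal rotation R maps hidden states into a basis in
    which the first `k` coordinates are treated as the feature subspace
    for one attribute (e.g., a city's country). The intervention swaps
    those coordinates between a base run and a source run, then rotates
    back, so only the targeted distributed feature is overwritten.
    """

    def __init__(self, hidden_dim: int, subspace_dim: int):
        super().__init__()
        # The parametrization keeps the weight orthogonal during training.
        self.rotation = orthogonal(nn.Linear(hidden_dim, hidden_dim, bias=False))
        self.k = subspace_dim

    def forward(self, base: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        R = self.rotation.weight          # (d, d), orthogonal
        base_rot = base @ R.T             # rotate into the learned basis
        source_rot = source @ R.T
        # Interchange: take the attribute subspace from the source,
        # keep everything else from the base.
        patched = torch.cat(
            [source_rot[..., : self.k], base_rot[..., self.k:]], dim=-1
        )
        return patched @ R                # rotate back to model coordinates


# Toy usage on random activations standing in for one hidden layer:
das = DistributedInterchange(hidden_dim=4096, subspace_dim=64)
base, source = torch.randn(2, 4096), torch.randn(2, 4096)
patched = das(base, source)  # would be re-injected at the chosen layer
```

In an MDAS-style setup, the rotation would be optimized so that patched activations change the model's prediction of the targeted attribute while leaving its predictions of other attributes intact, with one loss term per causal criterion.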