Who Does What in Deep Learning? Multidimensional Game-Theoretic Attribution of Function of Neural Units
Dixit, Shrey, Fakhar, Kayson, Hadaeghi, Fatemeh, Mineault, Patrick, Kording, Konrad P., Hilgetag, Claus C.
arXiv.org Artificial Intelligence
Neural networks now generate text, images, and speech with billions of parameters, creating a need to understand how each neural unit contributes to these high-dimensional outputs. Existing explainable-AI methods, such as SHAP, attribute importance to inputs but cannot quantify the contributions of neural units across thousands of output pixels, tokens, or logits. Here we close that gap with Multiperturbation Shapley-value Analysis (MSA), a model-agnostic game-theoretic framework. By systematically lesioning combinations of units, MSA yields Shapley Modes: unit-wise contribution maps that share the exact dimensionality of the model's output. We apply MSA across scales, from multi-layer perceptrons to the 56-billion-parameter Mixtral-8x7B and Generative Adversarial Networks (GANs). The approach demonstrates how regularisation concentrates computation in a few hubs, exposes language-specific experts inside the LLM, and reveals an inverted pixel-generation hierarchy in GANs. Together, these results showcase MSA as a powerful approach for interpreting, editing, and compressing deep neural networks.
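The lesioning-and-attribution idea described in the abstract can be sketched as a permutation-sampling Shapley estimator: repeatedly restore units in random order, record how the full output array changes at each step, and average. The function names and the zero-lesion toy model below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shapley_modes(model_output, n_units, n_perms=200, rng=None):
    """Estimate per-unit Shapley contributions ("Shapley Modes") by
    permutation sampling. `model_output(active)` must return the model's
    output array when only the units flagged True in the boolean mask
    `active` are intact and all others are lesioned."""
    rng = np.random.default_rng(rng)
    # Each unit's contribution map shares the output's dimensionality.
    out_shape = model_output(np.ones(n_units, dtype=bool)).shape
    modes = np.zeros((n_units,) + out_shape)
    for _ in range(n_perms):
        order = rng.permutation(n_units)
        active = np.zeros(n_units, dtype=bool)
        prev = model_output(active)        # fully lesioned baseline
        for u in order:
            active[u] = True               # restore unit u
            cur = model_output(active)
            modes[u] += cur - prev         # marginal contribution of u
            prev = cur
    return modes / n_perms

# Toy additive "model": the output is the sum of the weight vectors of
# the active units, so each unit's Shapley Mode is exactly its weights.
W = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
modes = shapley_modes(lambda active: W[active].sum(axis=0),
                      n_units=3, n_perms=10, rng=0)
```

For an additive model like this toy example the estimator recovers each unit's weight vector exactly; for real networks, where units interact, the sampled permutations average over those interactions in the usual Shapley sense.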
Jun-25-2025