The Spotlight Resonance Method: Resolving the Alignment of Embedded Activations

Bird, George

arXiv.org Artificial Intelligence

Understanding how deep learning models represent data is difficult given the limited number of methodologies available. This paper demonstrates a versatile and novel visualisation tool for determining the axis alignment of embedded data at any layer of any deep learning model. In particular, it evaluates the distribution of activations around planes defined by the network's privileged basis vectors. The method provides both an atomistic and a holistic, intuitive metric for interpreting the distribution of activations across all such planes, and it ensures that both positive and negative signals contribute, treating the activation vector as a whole. Several variations of the technique are presented, depending on the application, together with a resolution-scale hyperparameter for probing different angular scales. Using this method, multiple examples demonstrate that embedded representations tend to be axis-aligned with the privileged basis. This basis is not necessarily the standard basis; activation functions are found to directly produce privileged bases. The method therefore provides a direct causal link between functional-form symmetry breaking and representational alignment, explaining why representations tend to align with the neuron basis, and begins to answer the fundamental question of what causes that observed tendency. Finally, examples of so-called grandmother neurons are found in a variety of networks.

This work aims to better understand how artificial neural networks represent human-interpretable concepts in their hidden layers. Introductory texts often state that individual artificial neurons may respond to distinct real-world signals: one visual neuron may respond to the presence of fur, for example, while another responds to grass. Depending on the research field, this has been termed a neural "local coding scheme" (Foldiak & Endres, 2008), "grandmother neurons" (Gross, 2002; Connor, 2005), "gnostic neurons" (Konorski, 1968), or "one-hot encoding". It is unclear whether trained artificial neural networks actually produce this structure or whether it is an oversimplification; this work provides a versatile new tool, and evidence, to help settle the question. Samples provided to a neural network are represented as vectors of activations, which are typically transformed through a series of affine and non-linear steps during training and inference. The activation vectors are frequently decomposed into a particular basis in order to apply the non-linearities, and it is this basis that the non-linearity privileges.
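The paper's full procedure is more involved, but the core idea described above can be illustrated with a rough sketch: sweep a unit "spotlight" direction around the plane spanned by each pair of privileged basis vectors and record what fraction of activation vectors fall within a given angular tolerance of it. The function name, the cosine-threshold parameterisation of the resolution scale, and the plane-by-plane loop below are illustrative assumptions, not the paper's actual API or definition.

```python
import numpy as np

def spotlight_resonance(acts, basis, n_angles=360, cos_threshold=0.9):
    """Illustrative sketch of a spotlight-style sweep (not the paper's exact
    procedure). For each plane spanned by a pair of privileged basis vectors,
    rotate a unit spotlight vector through the plane and record the fraction
    of activation vectors lying within a fixed angular cone of it.

    acts:          (n_samples, d) activation vectors.
    basis:         (d, d) orthonormal privileged basis vectors as rows
                   (e.g. the identity for the standard neuron basis).
    cos_threshold: cosine-similarity cutoff, standing in for the
                   resolution-scale hyperparameter; higher values probe
                   narrower angular scales.
    Returns a dict mapping plane (i, j) -> resonance curve over the sweep.
    """
    # Normalise activations so that only direction matters.
    unit_acts = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    curves, d = {}, basis.shape[0]
    for i in range(d):
        for j in range(i + 1, d):
            # Spotlight directions sweeping the (i, j)-plane; unit length
            # because basis[i] and basis[j] are orthonormal.
            spots = (np.cos(thetas)[:, None] * basis[i]
                     + np.sin(thetas)[:, None] * basis[j])  # (n_angles, d)
            sims = unit_acts @ spots.T                      # (n_samples, n_angles)
            # Fraction of samples inside the cone at each angle. Negative
            # signals register when the spotlight passes the antipodal
            # direction, so both signs of activation contribute.
            curves[(i, j)] = (sims >= cos_threshold).mean(axis=0)
    return curves

# Usage: peaks near theta = 0 or pi/2 indicate activations clustered around
# the basis directions themselves, i.e. axis-aligned representations.
# curves = spotlight_resonance(hidden_acts, np.eye(hidden_acts.shape[1]))
```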


Toy Models of Superposition

Elhage, Nelson, Hume, Tristan, Olsson, Catherine, Schiefer, Nicholas, Henighan, Tom, Kravec, Shauna, Hatfield-Dodds, Zac, Lasenby, Robert, Drain, Dawn, Chen, Carol, Grosse, Roger, McCandlish, Sam, Kaplan, Jared, Amodei, Dario, Wattenberg, Martin, Olah, Christopher

arXiv.org Artificial Intelligence

Neural networks often pack many unrelated concepts into a single neuron, a puzzling phenomenon known as "polysemanticity" that makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features in "superposition". We demonstrate the existence of a phase change, a surprising connection to the geometry of uniform polytopes, and evidence of a link to adversarial examples. We also discuss potential implications for mechanistic interpretability.
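The toy setup studied in the paper compresses n sparse features into m < n hidden dimensions and reconstructs them through a ReLU readout with tied weights, under an importance-weighted loss. The sketch below follows that published setup; the specific sizes, sparsity, and importance decay are arbitrary values chosen for illustration, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of superposition: n sparse features are linearly compressed into
# m < n hidden dimensions, then reconstructed through a ReLU readout that
# reuses the embedding weights. Hyperparameter values here are illustrative.
n_features, n_hidden = 5, 2
W = rng.normal(size=(n_hidden, n_features)) * 0.1  # embedding weights
b = np.zeros(n_features)                           # readout bias

def forward(x, W, b):
    h = W @ x                                # compress to (m,) hidden units
    return np.maximum(0.0, W.T @ h + b)      # reconstruct with tied weights

def sparse_batch(batch, sparsity=0.9):
    # Each feature is present (uniform in [0, 1]) with probability 1 - sparsity;
    # high sparsity is what makes storing extra features in superposition pay off.
    x = rng.uniform(size=(batch, n_features))
    mask = rng.uniform(size=(batch, n_features)) > sparsity
    return x * mask

# Importance-weighted reconstruction loss, as in the paper's setup.
importance = 0.7 ** np.arange(n_features)

def loss(X, W, b):
    X_hat = np.array([forward(x, W, b) for x in X])
    return (importance * (X - X_hat) ** 2).sum(axis=1).mean()

print(loss(sparse_batch(1024), W, b))
```

When such a model is trained at high sparsity, more than m feature directions get packed into the m hidden dimensions, with the columns of W arranging into antipodal pairs or regular polygons; this is the polytope geometry the abstract refers to.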