Sparse Autoencoders Can Interpret Randomly Initialized Transformers
Thomas Heap, Tim Lawson, Lucy Farnik, Laurence Aitchison
arXiv.org Artificial Intelligence
Sparse autoencoders (SAEs) are an increasingly popular technique for interpreting the internal representations of transformers. In this paper, we apply SAEs to 'interpret' random transformers, i.e., transformers where the parameters are sampled IID from a Gaussian rather than trained on text data. We find that random and trained transformers produce similarly interpretable SAE latents, and we confirm this finding quantitatively using an open-source auto-interpretability pipeline. Further, we find that SAE quality metrics are broadly similar for random and trained transformers. We find that these results hold across model sizes and layers. We discuss a number of interesting questions that this work raises for the use of SAEs and auto-interpretability in the context of mechanistic interpretability.
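The setup the abstract describes, sampling all transformer weights IID from a Gaussian and then fitting an SAE to the resulting activations, can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' code: the architecture, the Gaussian standard deviation, the layer whose activations are used, and all SAE hyperparameters (latent count, L1 coefficient, learning rate) are assumptions for the sketch.

```python
import torch
import torch.nn as nn

# --- Random transformer: every parameter re-sampled IID from a Gaussian
# (architecture and std are illustrative assumptions, not the paper's).
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=4,
)
with torch.no_grad():
    for p in model.parameters():
        p.normal_(mean=0.0, std=0.02)  # IID Gaussian init instead of training

# --- Sparse autoencoder: overcomplete dictionary with an L1 sparsity penalty.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_latents: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse latent activations
        return self.decoder(z), z

sae = SparseAutoencoder(d_model=256, n_latents=2048)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity strength (assumed value)

# Train the SAE to reconstruct activations produced by the random transformer.
for _ in range(100):
    tokens = torch.randn(32, 16, 256)      # stand-in for embedded text inputs
    with torch.no_grad():
        acts = model(tokens)               # random-transformer activations
    recon, z = sae(acts)
    loss = (recon - acts).pow(2).mean() + l1_coeff * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The paper's finding is that the latents of an SAE trained this way on a *random* model look comparably interpretable to those trained on a genuinely trained model, which is what makes the control experiment informative.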
Jan-29-2025