Universal Neurons in GPT-2: Emergence, Persistence, and Functional Impact
Advey Nandan, Cheng-Ting Chou, Amrit Kurakula, Cole Blondin, Kevin Zhu, Vasu Sharma, Sean O'Brien
arXiv.org Artificial Intelligence
We investigate the phenomenon of neuron universality in independently trained GPT-2 Small models, examining how these universal neurons (neurons with consistently correlated activations across models) emerge and evolve throughout training. By analyzing five GPT-2 models at five checkpoints, we identify universal neurons through pairwise correlation analysis of activations over a dataset of 5 million tokens. Ablation experiments reveal significant functional impacts of universal neurons on model predictions, measured via cross-entropy loss. Additionally, we quantify neuron persistence, demonstrating high stability of universal neurons across training checkpoints, particularly in early and deeper layers. These findings suggest stable and universal representational structures emerge during language model training.
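The pairwise correlation analysis described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the activation matrices, the toy data, and the universality threshold of 0.5 are all hypothetical stand-ins, and the abstract does not specify the exact correlation statistic or cutoff used.

```python
import numpy as np

def max_pairwise_correlation(acts_a, acts_b):
    """For each neuron in model A, return its highest Pearson
    correlation with any neuron in model B.

    acts_a: (n_tokens, n_neurons_a) activations from model A
    acts_b: (n_tokens, n_neurons_b) activations from model B
    """
    # Standardize each neuron's activations to zero mean, unit variance.
    a = (acts_a - acts_a.mean(0)) / acts_a.std(0)
    b = (acts_b - acts_b.mean(0)) / acts_b.std(0)
    # Full cross-model correlation matrix: (n_neurons_a, n_neurons_b).
    corr = a.T @ b / acts_a.shape[0]
    # Best-matching partner neuron in model B for each neuron in A.
    return corr.max(axis=1)

# Toy data: two "models" whose neurons pick up the same latent signal,
# plus independent noise, so cross-model correlations are high.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 4))
acts_a = shared + 0.1 * rng.normal(size=(1000, 4))
acts_b = shared + 0.1 * rng.normal(size=(1000, 4))

# Flag neurons as "universal" if their best cross-model correlation
# exceeds a hypothetical threshold.
universal = max_pairwise_correlation(acts_a, acts_b) > 0.5
print(universal)
```

In practice this would be run over activations collected from a large token corpus (5 million tokens in the paper) and across all pairs of the independently trained models, rather than on synthetic data.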
Nov-11-2025