Towards Generating Informative Textual Description for Neurons in Language Models
Shrayani Mondal, Rishabh Garodia, Arbaaz Qureshi, Taesung Lee, Youngja Park
arXiv.org Artificial Intelligence
Recent developments in transformer-based language models have allowed them to capture a wide variety of world knowledge that can be adapted to downstream tasks with limited resources. However, it is unclear what pieces of information these models understand, and the neuron-level contributions to identifying them are largely unknown. Conventional approaches to neuron explainability either depend on a finite set of pre-defined descriptors or require manual annotations for training a secondary model that can then explain the neurons of the primary model. In this paper, taking BERT as an example, we remove these constraints and propose a novel and scalable framework that ties textual descriptions to neurons. We leverage the potential of generative language models to discover human-interpretable descriptors present in a dataset and use an unsupervised approach to explain neurons with these descriptors. Through various qualitative and quantitative analyses, we demonstrate the effectiveness of this framework in generating useful, data-specific descriptors and in identifying, with little human involvement, the neurons that encode them. In particular, our experiments show that the proposed approach achieves 75% precision@2 and 50% recall@2.
Jan-29-2024
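The abstract does not spell out the matching procedure, so purely as an illustration, here is a minimal sketch of the general recipe it describes: score candidate descriptors per neuron from activation statistics over a dataset, then evaluate the top-ranked descriptors with precision@k and recall@k. The correlation-based scoring and all names below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def rank_descriptors(activations, descriptor_mask):
    """Rank candidate descriptors for one neuron.

    activations: (num_inputs,) neuron activations over a dataset.
    descriptor_mask: (num_descriptors, num_inputs) binary matrix;
    entry is 1 if the descriptor applies to that input.
    Returns descriptor indices sorted by correlation with the
    neuron's activation (an assumed scoring choice, not the paper's).
    """
    scores = []
    for mask in descriptor_mask:
        if mask.std() == 0:          # constant descriptor: correlation undefined
            scores.append(0.0)
        else:
            scores.append(np.corrcoef(activations, mask)[0, 1])
    return np.argsort(scores)[::-1]  # best-matching descriptors first

def precision_recall_at_k(ranked, gold, k=2):
    """precision@k and recall@k of a ranked descriptor list against gold labels."""
    hits = len(set(ranked[:k]) & set(gold))
    return hits / k, hits / len(gold)

# Toy usage: descriptor 3 is constructed to track the neuron, so it ranks first.
rng = np.random.default_rng(0)
acts = rng.normal(size=100)
masks = rng.integers(0, 2, size=(5, 100))
masks[3] = (acts > 0).astype(int)
ranked = rank_descriptors(acts, masks)
p, r = precision_recall_at_k(ranked, gold=[3], k=2)
print(f"precision@2={p:.2f}, recall@2={r:.2f}")
```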