Isolating Culture Neurons in Multilingual Large Language Models
Danial Namazifard, Lukas Galke
arXiv.org Artificial Intelligence
Language and culture are deeply intertwined, yet it has been unclear how and where multilingual large language models encode culture. Here, we build on an established methodology for identifying language-specific neurons to localize and isolate culture-specific neurons, carefully disentangling their overlap and interaction with language-specific neurons. To facilitate our experiments, we introduce MUREL, a curated dataset of 85.2 million tokens spanning six different cultures. Our localization and intervention experiments show that LLMs encode different cultures in distinct neuron populations, predominantly in upper layers, and that these culture neurons can be modulated largely independently of language-specific neurons or those specific to other cultures. These findings suggest that cultural knowledge and propensities in multilingual language models can be selectively isolated and edited, with implications for fairness, inclusivity, and alignment. Code and data are available at https://github.com/namazifard/Culture_Neurons.
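The abstract describes localizing culture-specific neurons and then intervening on them independently of language-specific neurons. The paper's actual pipeline is not reproduced here; the following is a minimal sketch of the general approach, assuming a LAPE-style specificity score (low entropy of normalized activation probabilities across cultures) for selection and zero-ablation for intervention. The function names, the `top_frac` parameter, and the toy activation statistics are illustrative, not taken from the paper or its released code.

```python
import numpy as np

def culture_specific_neurons(act_prob, top_frac=0.01):
    """Select culture-specific neurons from activation statistics.

    act_prob: array of shape (n_cultures, n_neurons), where entry (c, i)
    is the probability that neuron i activates on culture c's corpus.
    Neurons whose activation distribution over cultures has low entropy
    are treated as culture-specific (in the spirit of LAPE-style scores).
    Returns a dict mapping culture index -> selected neuron indices.
    """
    # Normalize each neuron's activation probabilities over cultures.
    p = act_prob / (act_prob.sum(axis=0, keepdims=True) + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0)

    # Keep the most specific (lowest-entropy) fraction of neurons.
    k = max(1, int(top_frac * act_prob.shape[1]))
    candidates = np.argsort(entropy)[:k]

    # Assign each selected neuron to the culture that activates it most.
    assignment = act_prob[:, candidates].argmax(axis=0)
    return {c: candidates[assignment == c]
            for c in range(act_prob.shape[0])}

def ablate(hidden, neuron_idx):
    """Intervention: zero out the selected neurons in a hidden state."""
    out = hidden.copy()
    out[..., neuron_idx] = 0.0
    return out
```

In this sketch, comparing model outputs before and after `ablate` on the selected indices would indicate whether the ablated neurons carry culture-specific behavior, mirroring the localization-then-intervention logic the abstract describes.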
Nov-12-2025