Language Models Represent Beliefs of Self and Others
Wentao Zhu, Zhining Zhang, Yizhou Wang
arXiv.org Artificial Intelligence
Understanding and attributing mental states, known as Theory of Mind (ToM), is a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we find that the belief status of various agents can be linearly decoded from the neural activations of language models, indicating the existence of internal representations of self and others' beliefs. Manipulating these representations produces dramatic changes in the models' ToM performance, underscoring their pivotal role in the social reasoning process. Our findings further extend to diverse social reasoning tasks involving different causal inference patterns, suggesting that these representations may generalize.
May-30-2024
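The linear decoding the abstract describes is commonly implemented as a linear probe trained on hidden activations. The sketch below is illustrative only and does not use the paper's models or data: it synthesizes activations in which a hypothetical "belief direction" is mixed into random noise, then fits a ridge-regularized linear probe in closed form to recover the belief label.

```python
# Illustrative sketch of a linear belief probe on synthetic activations.
# A real experiment would extract hidden states from an LLM on ToM stories;
# here the belief label is planted along a made-up "belief direction".
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 500  # hypothetical activation dimension and sample count

# Synthetic data: unit-variance noise plus a signed belief direction.
belief_dir = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)          # 0 = false belief, 1 = true belief
acts = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, belief_dir)

# Closed-form ridge-regularized linear probe with a bias term.
X = np.hstack([acts, np.ones((n, 1))])       # append bias column
y = 2.0 * labels - 1.0                       # targets in {-1, +1}
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d + 1), X.T @ y)

preds = (X @ w > 0).astype(int)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

On this synthetic setup the planted direction is strong, so the probe separates the two belief labels almost perfectly; the interesting empirical question the paper addresses is whether real LLM activations admit such a direction.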
- Country:
  - Africa
    - Ghana (0.04)
    - Middle East > Egypt (0.04)
    - Nigeria (0.04)
    - South Africa (0.04)
  - Asia
    - China (0.04)
    - India > Kerala (0.04)
    - Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
    - Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
  - Europe
    - Eastern Europe (0.04)
    - Greece (0.04)
    - Italy (0.04)
    - France (0.04)
    - Norway > Norwegian Sea (0.04)
    - Netherlands > North Holland > Amsterdam (0.04)
    - Austria > Vienna (0.14)
    - Sweden > Stockholm (0.04)
  - North America
    - Mexico > Mexico City (0.04)
    - United States > Hawaii (0.04)
- Genre:
  - Research Report
  - Experimental Study (0.93)
  - New Finding (1.00)
- Industry:
  - Education (0.67)
  - Health & Medicine > Therapeutic Area > Neurology (0.67)
  - Leisure & Entertainment > Social Events (0.67)