Doing Things with Words: Rethinking Theory of Mind Simulation in Large Language Models
Agnese Lombardi, Alessandro Lenci
–arXiv.org Artificial Intelligence
Language is fundamental to human cooperation, facilitating not only the exchange of information but also the coordination of actions through shared interpretations of situational contexts. This study explores whether the Generative Agent-Based Model (GABM) Concordia can effectively model Theory of Mind (ToM) within simulated real-world environments. Specifically, we assess whether this framework successfully simulates ToM abilities and whether GPT-4 can perform tasks by making genuine inferences from social context rather than relying on linguistic memorization. Our findings reveal a critical limitation: GPT-4 frequently fails to select actions based on belief attribution, suggesting that apparent ToM-like abilities observed in previous studies may stem from shallow statistical associations rather than true reasoning. Additionally, the model struggles to generate coherent causal effects from agent actions, exposing difficulties in processing complex social interactions. These results challenge current claims about emergent ToM-like capabilities in LLMs and highlight the need for more rigorous, action-based evaluation frameworks.
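To make the abstract's notion of an "action-based" ToM evaluation concrete, here is a minimal, hypothetical sketch of a false-belief action-selection check in the style of the Sally-Anne task. This is not the paper's actual Concordia setup; the `Scene` class, field names, and the oracle function are illustrative assumptions. The idea is that success is measured by the action chosen (where a character with an outdated belief will search), not by verbal report.

```python
# Hedged sketch, NOT the paper's Concordia implementation: an action-based
# false-belief probe. The evaluator checks whether a chosen action is
# consistent with what the actor actually witnessed, not with the full
# world state. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Scene:
    history: list[str]     # all events that occurred in the world
    actor_saw: list[str]   # subset of events the actor witnessed
    options: list[str]     # candidate actions offered to the model
    correct: str           # action consistent with the actor's belief


def belief_consistent_action(scene: Scene) -> str:
    """Oracle: pick the action implied by the actor's (possibly outdated)
    belief, i.e., track only the events the actor witnessed."""
    location = None
    for event in scene.actor_saw:
        if event.startswith("ball moved to "):
            location = event.removeprefix("ball moved to ")
    return f"search {location}"


# The actor leaves before the second move, so their belief is outdated.
scene = Scene(
    history=["ball moved to basket", "ball moved to box"],
    actor_saw=["ball moved to basket"],
    options=["search basket", "search box"],
    correct="search basket",
)

assert belief_consistent_action(scene) == scene.correct
```

A model evaluated this way would replace the oracle with an LLM-selected action; the abstract's finding is that GPT-4 often fails exactly this step, choosing actions based on the true world state rather than the actor's attributed belief.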
Oct-16-2025