Mind the Motions: Benchmarking Theory-of-Mind in Everyday Body Language
Lee, Seungbeen, Jeong, Jinhong, Kim, Donghyun, Son, Yejin, Yu, Youngjae
–arXiv.org Artificial Intelligence
Our ability to interpret others' mental states through nonverbal cues (NVCs) is fundamental to our survival and social cohesion. While existing Theory of Mind (ToM) benchmarks have primarily focused on false-belief tasks and reasoning with asymmetric information, they overlook mental states beyond belief and the rich tapestry of human nonverbal communication. We present Motion2Mind, a framework for evaluating the ToM capabilities of machines in interpreting NVCs. Leveraging an expert-curated body-language reference as a proxy knowledge base, we build a carefully curated video dataset with fine-grained nonverbal-cue annotations paired with manually verified psychological interpretations, encompassing 222 types of nonverbal cues and 397 mental states. Our evaluation reveals that current AI systems struggle significantly with NVC interpretation, exhibiting not only a substantial performance gap in Detection but also patterns of over-interpretation in Explanation compared with human annotators.
Nov-21-2025