Understanding Epistemic Language with a Bayesian Theory of Mind
Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua B. Tenenbaum
arXiv.org Artificial Intelligence
How do people understand and evaluate claims about others' beliefs, even though these beliefs cannot be directly observed? In this paper, we introduce a cognitive model of epistemic language interpretation, grounded in Bayesian inferences about other agents' goals, beliefs, and intentions: a language-augmented Bayesian theory-of-mind (LaBToM). By translating natural language into an epistemic "language-of-thought", then evaluating these translations against the inferences produced by inverting a probabilistic generative model of rational action and perception, LaBToM captures graded plausibility judgments about epistemic claims. We validate our model in an experiment where participants watch an agent navigate a maze to find keys hidden in boxes needed to reach their goal, then rate sentences about the agent's beliefs. In contrast with multimodal LLMs (GPT-4o, Gemini Pro) and ablated models, our model correlates highly with human judgments for a wide range of expressions, including modal language, uncertainty expressions, knowledge claims, likelihood comparisons, and attributions of false belief.
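The core inference step described above, updating a posterior over an agent's beliefs from its observations and then grading an epistemic claim against that posterior, can be illustrated with a minimal sketch. The box names, the uniform prior, and the mapping of "probably" to a posterior probability are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch: Bayesian belief inference about a key hidden in one of
# several boxes, then grading an epistemic claim against the posterior.
# (Box labels and thresholds are hypothetical, for illustration only.)

boxes = ["A", "B", "C"]

# Uniform prior over which box the agent believes holds the key.
prior = {b: 1.0 / len(boxes) for b in boxes}

def update(belief, opened_box, key_found):
    """Bayesian update after the agent opens a box and observes its contents."""
    posterior = {}
    for b, p in belief.items():
        if key_found:
            likelihood = 1.0 if b == opened_box else 0.0
        else:
            likelihood = 0.0 if b == opened_box else 1.0
        posterior[b] = p * likelihood
    z = sum(posterior.values())
    return {b: p / z for b, p in posterior.items()}

# The agent opens box "A" and finds it empty.
belief = update(prior, "A", key_found=False)

# Grade the claim "the agent probably believes the key is in box B"
# by comparing the posterior to a soft threshold for "probably".
claim_prob = belief["B"]
print(claim_prob)            # 0.5
print(claim_prob >= 0.5)     # True
```

In the full LaBToM model this posterior comes from inverting a generative model of rational action and perception, and the claim is first translated into an epistemic language-of-thought before being scored; this sketch only shows the shape of the final evaluation step.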
Aug-21-2024