
Collaborating Authors

 Verbrugge, Rineke


Towards properly implementing Theory of Mind in AI systems: An account of four misconceptions

arXiv.org Artificial Intelligence

The search for effective collaboration between humans and computer systems is one of the biggest challenges in Artificial Intelligence. One of the more effective mechanisms that humans use to coordinate with one another is theory of mind (ToM). ToM can be described as the ability to 'take someone else's perspective and make estimations of their beliefs, desires and intentions, in order to make sense of their behaviour and attitudes towards the world'. If leveraged properly, this skill can be very useful in human-AI collaboration. This raises the question of how to implement ToM when building an AI system. Humans and AI systems work quite differently, and ToM is a multifaceted concept, each facet rooted in different research traditions across the cognitive and developmental sciences. We observe that researchers from artificial intelligence and the computing sciences, ourselves included, often have difficulties finding their way in the ToM literature. In this paper, we identify four common misconceptions around ToM that we believe should be taken into account when developing an AI system. We have hyperbolised these misconceptions for the sake of the argument, but add nuance in their discussion. The misconceptions we discuss are: (1) "Humans Use a ToM Module, So AI Systems Should As Well"; (2) "Every Social Interaction Requires (Advanced) ToM"; (3) "All ToM is the Same"; and (4) "Current Systems Already Have ToM". After discussing each misconception, we end its section by providing tentative guidelines on how it can be overcome.
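The perspective-taking described above can be illustrated with a minimal sketch of first-order ToM. The scenario, decision rule, and preference values below are hypothetical and only illustrate the idea: an agent predicts another's action by simulating that agent's choice under its *beliefs* about the other's preferences, rather than its own preferences.

```python
def choose(preferences, options):
    """Zero-order decision rule: pick the most preferred option."""
    return max(options, key=lambda o: preferences.get(o, 0.0))

def predict_other(believed_preferences, options):
    """First-order ToM: simulate the other agent's choice using our
    estimate of *their* preferences, not our own."""
    return choose(believed_preferences, options)

# Hypothetical example: my own preferences differ from what I
# believe about the other agent's preferences.
options = ["cooperate", "defect"]
my_prefs = {"cooperate": 1.0, "defect": 0.2}
my_beliefs_about_you = {"cooperate": 0.3, "defect": 0.9}

print(choose(my_prefs, options))                    # cooperate
print(predict_other(my_beliefs_about_you, options)) # defect
```

The point of the sketch is the separation of models: the same decision rule is applied to two different preference structures, which is the minimal machinery needed to estimate someone else's behaviour from their (believed) mental state.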


Proceedings of the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge

arXiv.org Artificial Intelligence

The TARK conference (Theoretical Aspects of Rationality and Knowledge) aims to bring together researchers from a wide variety of fields, including computer science, artificial intelligence, game theory, decision theory, philosophy, logic, linguistics, and cognitive science. Its goal is to further our understanding of interdisciplinary issues involving reasoning about rationality and knowledge. Previous conferences have been held biennially around the world since 1986, on the initiative of Joe Halpern (Cornell University). Topics of interest include, but are not limited to, semantic models for knowledge, belief, awareness and uncertainty, bounded rationality and resource-bounded reasoning, commonsense epistemic reasoning, epistemic logic, epistemic game theory, knowledge and action, applications of reasoning about knowledge and other mental states, belief revision, computational social choice, algorithmic game theory, and foundations of multi-agent systems. Information about TARK, including conference proceedings, is available at http://www.tark.org/. These proceedings contain the papers accepted for presentation at the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2023), held from June 28 to June 30, 2023, at the University of Oxford, United Kingdom. The conference website can be found at https://sites.google.com/view/tark-2023


Strong Admissibility for Abstract Dialectical Frameworks

arXiv.org Artificial Intelligence

Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation, allowing general logical satisfaction conditions. The different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. However, the notion of strongly admissible semantics, studied for abstract argumentation frameworks, has not yet been introduced for ADFs. In the current work we present the concept of strong admissibility of interpretations for ADFs. Further, we show that the strongly admissible interpretations of an ADF form a lattice with the grounded interpretation as its top element.
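The grounded interpretation mentioned above can be sketched concretely. Under the standard approximation-based reading, it is the least fixpoint of a characteristic operator that decides a statement only when its acceptance condition agrees on every two-valued completion of the current three-valued interpretation. The toy ADF below (three statements with hypothetical acceptance conditions, chosen only for illustration) is not from the paper; it is a minimal sketch of that construction, using `None` for "undecided".

```python
from itertools import product

# Toy ADF: each statement's acceptance condition is a boolean function
# of a full (two-valued) assignment. This example is hypothetical.
statements = ["a", "b", "c"]
conditions = {
    "a": lambda v: True,         # a is unconditionally accepted
    "b": lambda v: v["a"],       # b is accepted iff a is
    "c": lambda v: not v["b"],   # c is accepted iff b is not
}

def completions(interp):
    """All two-valued assignments extending a three-valued
    interpretation (None = undecided)."""
    undecided = [s for s in statements if interp[s] is None]
    for bits in product([True, False], repeat=len(undecided)):
        v = {s: interp[s] for s in statements if interp[s] is not None}
        v.update(dict(zip(undecided, bits)))
        yield v

def gamma(interp):
    """Characteristic operator: decide a statement iff its acceptance
    condition evaluates identically on every completion."""
    new = {}
    for s in statements:
        values = {conditions[s](v) for v in completions(interp)}
        new[s] = values.pop() if len(values) == 1 else None
    return new

def grounded():
    """Iterate gamma from the all-undecided interpretation up to its
    least fixpoint: the grounded interpretation."""
    interp = {s: None for s in statements}
    while True:
        nxt = gamma(interp)
        if nxt == interp:
            return interp
        interp = nxt

print(grounded())  # {'a': True, 'b': True, 'c': False}
```

Each iteration only ever moves statements from undecided to decided, which is why the iteration is monotone and terminates; the grounded interpretation computed this way sits, as the abstract states, at the top of the lattice of strongly admissible interpretations.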


Logic in the Lab

arXiv.org Artificial Intelligence

This file summarizes the plenary talk on laboratory experiments on logic, given at TARK 2013, the 14th Conference on Theoretical Aspects of Rationality and Knowledge.


Modeling Deliberation in Teamwork

AAAI Conferences

Cooperation in multiagent systems essentially hinges on appropriate communication. This paper shows how to model communication in teamwork within TeamLog, the first multi-modal framework wholly capturing a methodology for working together. Starting from the dialogue theory of Walton and Krabbe, the paper focuses on deliberation, the main type of dialogue during team planning. We provide a four-stage schema of deliberation dialogue, along with a semantics of adequate speech acts, filling the gap in the logical modeling of communication during planning.