Measuring Human-Robot Trust with the MDMT (Multi-Dimensional Measure of Trust)
Malle, Bertram F. | Ullman, Daniel
We describe the steps of developing the MDMT (Multi-Dimensional Measure of Trust), an intuitive self-report measure of the perceived trustworthiness of various agents (human, robot, animal). We summarize the evidence that led to the original four-dimensional form (v1) and to the most recent five-dimensional form (v2). We examine the measure's strengths and limitations and point to necessary further validation.
How People Explain Action (and Autonomous Intelligent Systems Should Too)
Graaf, Maartje M. A. de (Brown University) | Malle, Bertram F. (Brown University)
To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, “explainable,” we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits will be considerable: when an AIS is able to explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of the system and calibrate their trust in it.
Inevitable Psychological Mechanisms Triggered by Robot Appearance: Morality Included?
Malle, Bertram F. (Brown University) | Scheutz, Matthias (Tufts University)
Certain stimuli in the environment reliably, and perhaps inevitably, trigger human cognitive and behavioral responses. We suggest that the presence of such “trigger stimuli” in modern robots can have disconcerting consequences. We provide one new example of such consequences: a reversal in the pattern of moral judgments people make about robots, depending on whether they view a “mechanical” or a “humanoid” robot.