The Air Force Wants You to Trust Robots--Should You?

AITopics Original Links

A British fighter jet was returning to its base in Kuwait after a mission on the third day of the 2003 Iraq War when a U.S. anti-missile system spotted it, identified it as an enemy missile, and fired. The two men in the plane were both killed. A week and a half later, the same system--the vaunted Patriot--made the same mistake. This time, an American plane was downed and an American pilot killed. The missile battery that targeted the two jets was almost entirely automated.

Building Appropriate Trust in Human-Robot Teams

AAAI Conferences

Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots interacting with humans in a more naturalistic manner, approaching a relationship more akin to human–human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis for this paper is that trust in a robot team member must be appropriately calibrated rather than simply maximized. We describe how the human team member's understanding of the system contributes to trust in human-robot teaming, by evoking mental model theory. We discuss how mental models are related to physical and behavioral characteristics of the robot, on the one hand, and affective and behavioral outcomes, such as trust and system use, disuse, and misuse, on the other. We expand upon our discussion by providing recommendations for best practices in human-robot team research and design and for other systems using artificial intelligence.
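The calibration idea in this abstract can be made concrete with a small sketch. The function names, the tolerance value, and the [0, 1] scales below are illustrative assumptions, not the authors' model: trust is "appropriately calibrated" when the operator's subjective trust tracks the robot's actual reliability, while a large positive gap (overtrust) invites misuse and a large negative gap (undertrust) invites disuse.

```python
# Hypothetical sketch of trust calibration: compare the operator's
# subjective trust to the robot's actual reliability (both on a 0-1
# scale, an assumed convention) and classify the mismatch.

def calibration_gap(operator_trust: float, robot_reliability: float) -> float:
    """Signed gap between trust and reliability.

    Positive -> trust exceeds reliability (risk of misuse);
    negative -> trust falls short of reliability (risk of disuse);
    near zero -> appropriately calibrated.
    """
    return operator_trust - robot_reliability

def classify(gap: float, tolerance: float = 0.1) -> str:
    # The 0.1 tolerance band is an arbitrary illustrative choice.
    if gap > tolerance:
        return "overtrust"   # operator may misuse the robot
    if gap < -tolerance:
        return "undertrust"  # operator may disuse the robot
    return "calibrated"

print(classify(calibration_gap(0.9, 0.6)))    # prints "overtrust"
print(classify(calibration_gap(0.5, 0.55)))   # prints "calibrated"
```

The point of the sketch is the paper's thesis in miniature: the design goal is driving the gap toward zero, not driving trust toward one.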

Love Your Robot? You're Not Alone.


The interaction between humans and robots has been a common theme in science fiction and popular culture, but robotic machines don't have to be humanoid or even have much personality for people to develop a relationship with them.

Being Transparent about Transparency: A Model for Human-Robot Interaction

AAAI Conferences

The current paper discusses the concept of human-robot interaction through the lens of a model depicting the key elements of robot-to-human and robot-of-human transparency. Robot-to-human factors represent information that the system (which includes the robot but is broader than just the robot) needs to present to users before, during, or after interactions. Robot-of-human variables are factors relating to the human (or to interactions with the human, i.e., teamwork) that the system needs to communicate an awareness of to users. The paper closes with some potential design implications for the various transparency domains, including training and the human-robot interface (covering social design, feedback, and display design).

Adapting Autonomous Behavior Based on an Estimate of an Operator's Trust

AAAI Conferences

Robots can be added to human teams to provide improved capabilities or to perform tasks that humans are unsuited for. However, to get the full benefit of the robots, the human teammates must use them in the appropriate situations. If the humans do not trust the robots, they may underutilize or disuse them, which could result in a failure to achieve team goals. We present a robot that is able to estimate its trustworthiness and adapt its behavior accordingly. This technique helps the robot remain trustworthy even when context, task, or teammates change.
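The adapt-to-trust loop described above can be sketched in a few lines. This is not the authors' implementation: the class, the exponential-moving-average update, the initial trust value, and the 0.5 threshold are all illustrative assumptions. The idea shown is only the general shape of the technique: maintain a running trust estimate from task outcomes, and fall back to a more conservative, operator-supervised mode when the estimate drops.

```python
# Hypothetical sketch: a robot tracks an estimate of the operator's
# trust from observed task outcomes and adapts its autonomy level.

class TrustAdaptiveRobot:
    def __init__(self, alpha: float = 0.2, threshold: float = 0.5):
        self.trust = 0.8          # assumed initial trust estimate
        self.alpha = alpha        # smoothing rate for the running estimate
        self.threshold = threshold

    def observe(self, success: bool) -> None:
        # Exponential moving average over outcomes: successes raise
        # the estimate, failures lower it.
        outcome = 1.0 if success else 0.0
        self.trust = (1 - self.alpha) * self.trust + self.alpha * outcome

    def mode(self) -> str:
        # Adapt behavior to the estimate: stay autonomous only while
        # estimated trust remains above the threshold.
        return "autonomous" if self.trust >= self.threshold else "ask_operator"

robot = TrustAdaptiveRobot()
for outcome in [False, False, False, False]:  # a run of task failures
    robot.observe(outcome)
print(robot.mode())  # prints "ask_operator"
```

After four failures the estimate has decayed from 0.8 to roughly 0.33, so the sketch's robot stops acting autonomously and defers to the operator, which is the behavioral adaptation the abstract describes.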