Trust and Cooperation in Human-Robot Decision Making

AAAI Conferences

Trust plays a key role in social interactions, particularly when the decisions we make depend on the people we face. In this paper, we use game theory to explore whether a person's decisions are influenced by the type of agent they interact with: human or robot. By adopting a coin entrustment game, we quantitatively measure trust and cooperation to see whether such phenomena emerge differently when a person believes they are playing against a robot rather than another human. We found that while people cooperate with humans and robots at a similar rate, they grow to trust robots more completely than humans. As a possible explanation for these differences, our survey results suggest that participants perceive humans as having a faculty for feelings and sympathy, whereas they perceive robots as more precise and reliable.
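A minimal sketch of how trust and cooperation might be operationalized in an entrustment-style game, purely for illustration: the round structure, payoff rule, and function names below are assumptions, not the protocol used in the paper. Trust is read off as the number of coins a player hands over, and cooperation as whether the partner returns them.

```python
import random

# Illustrative only: a single round of an entrustment-style game.
# The rules and parameter names here are hypothetical, not the paper's protocol.
def play_round(coins_held, trust_level, return_prob, seed=None):
    rng = random.Random(seed)
    entrusted = round(coins_held * trust_level)   # trust: how many coins are risked
    cooperated = entrusted > 0 and rng.random() < return_prob
    returned = entrusted if cooperated else 0     # cooperation: entrusted coins come back
    return {"entrusted": entrusted, "returned": returned, "cooperated": cooperated}

# Example: a fairly trusting player facing a mostly cooperative partner.
print(play_round(coins_held=10, trust_level=0.7, return_prob=0.9, seed=42))
```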


Modeling the Dynamics of Nonverbal Behavior on Interpersonal Trust for Human-Robot Interactions

AAAI Conferences

We describe research towards creating a computational model for recognizing interpersonal trust in social interactions. We found that four negative gestural cues (leaning backward, face touching, hand touching, and crossing arms) are together predictive of lower levels of trust, while three positive gestural cues (leaning forward, arms in lap, and open arms) are predictive of higher levels of trust. Using natural social interaction data, we train a probabilistic graphical model, a "Trust Hidden Markov Model," that incorporates the occurrence of these seven gestures throughout the social interaction. This Trust HMM predicts with 69.44% accuracy whether an individual is willing to behave cooperatively or uncooperatively with a novel partner; in comparison, a gesture-ignorant model achieves 63.89% accuracy. We attempt to automate this recognition process by detecting these trust-related behaviors with 3D motion capture technology and gesture recognition algorithms. We aim to eventually create a hierarchical system, with low-level gesture recognition feeding high-level trust recognition, that can predict whether an individual finds another to be a trustworthy or untrustworthy partner from their nonverbal expressions.
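As a rough illustration of the classification idea (not the authors' implementation), the sketch below scores a sequence of the seven gestures under two small discrete HMMs, one standing in for cooperative interactions and one for uncooperative ones, and labels the sequence by whichever assigns the higher likelihood. All state counts, probabilities, and names are made-up placeholders.

```python
import numpy as np

# Hypothetical sketch: classify a gesture sequence as cooperative/uncooperative by
# comparing its likelihood under two small discrete HMMs. Every number below is a
# placeholder, not a parameter from the paper.

GESTURES = ["lean_back", "face_touch", "hand_touch", "cross_arms",
            "lean_forward", "arms_in_lap", "open_arms"]
IDX = {g: i for i, g in enumerate(GESTURES)}

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence via a scaled forward pass."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# Two latent postural states (e.g. "engaged" vs "withdrawn"), shared start/transition
# probabilities, and class-specific emission probabilities over the seven gestures.
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
emit_coop = np.array([[0.02, 0.02, 0.02, 0.04, 0.40, 0.25, 0.25],
                      [0.10, 0.10, 0.10, 0.10, 0.25, 0.20, 0.15]])
emit_uncoop = np.array([[0.30, 0.20, 0.20, 0.20, 0.04, 0.03, 0.03],
                        [0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.10]])

def predict_cooperative(gesture_sequence):
    obs = [IDX[g] for g in gesture_sequence]
    ll_coop = forward_loglik(obs, start, trans, emit_coop)
    ll_uncoop = forward_loglik(obs, start, trans, emit_uncoop)
    return "cooperative" if ll_coop > ll_uncoop else "uncooperative"

print(predict_cooperative(["lean_forward", "open_arms", "arms_in_lap"]))  # cooperative
print(predict_cooperative(["lean_back", "cross_arms", "face_touch"]))     # uncooperative
```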


Building Appropriate Trust in Human-Robot Teams

AAAI Conferences

Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots interacting with humans in a more naturalistic manner, approaching a relationship more akin to human–human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis for this paper is that trust in a robot team member must be appropriately calibrated rather than simply maximized. Invoking mental model theory, we describe how the human team member's understanding of the system contributes to trust in human-robot teaming. We discuss how mental models relate to physical and behavioral characteristics of the robot, on the one hand, and to affective and behavioral outcomes, such as trust and system use/disuse/misuse, on the other. We expand upon our discussion by providing recommendations for best practices in human-robot team research and design, as well as for other systems using artificial intelligence.


Why This Robot Ethicist Trusts Technology More Than Humans

#artificialintelligence

MIT's Kate Darling, who writes the rules of human-robot interaction, says an AI-enabled apocalypse should be the least of our concerns. As a law student in Switzerland, Kate Darling's interest in robots was just a hobby. She had purchased a PLEO robot dinosaur that was designed to respond to human contact emotionally and act independently. "It really struck me that I responded to the cues the robot was giving me, even though I knew exactly how the toy worked," Darling says. "I knew where all the motors were and how it worked, and why it would cry when you held it up by the tail, but I was just so compelled to comfort it and make it stop crying."


Trust and Cognitive Load During Human-Robot Interaction

arXiv.org Artificial Intelligence

This paper presents an exploratory study of the relationship between a human's cognitive load, trust, and anthropomorphism during human-robot interaction. To probe this relationship, we created a "Matching the Pair" game that participants could play collaboratively with one of two robot types, Husky or Pepper. The goal was to understand whether humans would trust the robot as a teammate while in a game-playing situation that demanded a high level of cognitive load. By contrasting a humanoid with a technical robot, we also investigated the impact of physical anthropomorphism, and we further tested the impact of robot error rate on subsequent judgments and behavior. Our results showed an inverse relationship between trust and cognitive load: as participants' cognitive load increased, their ratings of trust decreased. We also found a three-way interaction between robot type, error rate, and participants' ratings of trust. Participants perceived Pepper as more trustworthy than the Husky robot after playing the game with both robots under the high error-rate condition; conversely, Husky was perceived as more trustworthy than Pepper when it featured a low error rate. These results call for further investigation of the impact of physical anthropomorphism in combination with variable robot error rates.