Collaborating Authors

Modeling the Dynamics of Nonverbal Behavior on Interpersonal Trust for Human-Robot Interactions

AAAI Conferences

We describe research towards creating a computational model for recognizing interpersonal trust in social interactions. We found that four negative gestural cues—leaning-backward, face-touching, hand-touching, and crossing-arms—are together predictive of lower levels of trust. Three positive gestural cues—leaning-forward, having arms-in-lap, and open-arms—are predictive of higher levels of trust. We train a probabilistic graphical model using natural social interaction data, a “Trust Hidden Markov Model” that incorporates the occurrence of these seven important gestures throughout the social interaction. This Trust HMM predicts with 69.44% accuracy whether an individual is willing to behave cooperatively or uncooperatively with their novel partner; in comparison, a gesture-ignorant model achieves 63.89% accuracy. We attempt to automate this recognition process by detecting those trust-related behaviors through 3D motion capture technology and gesture recognition algorithms. We aim to eventually create a hierarchical system—with low-level gesture recognition for high-level trust recognition—that is capable of predicting whether an individual finds another to be a trustworthy or untrustworthy partner through their nonverbal expressions.
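As a rough illustration of how such a Trust HMM could operate, the sketch below runs the forward algorithm over a two-state hidden trust variable whose observations are the seven gesture cues named above. All transition and emission probabilities here are invented for demonstration; they are not the trained model's parameters.

```python
import numpy as np

# Illustrative sketch only: a two-state HMM (hidden state = low/high trust)
# observing the seven gesture cues from the abstract. Probabilities are
# made up; the paper's Trust HMM is trained on social-interaction data.

GESTURES = ["lean_back", "face_touch", "hand_touch", "cross_arms",
            "lean_forward", "arms_in_lap", "open_arms"]

# Transition matrix: rows = current trust state (0 = low, 1 = high).
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Emission probabilities P(gesture | state); columns follow GESTURES.
# Negative cues are more likely under low trust, positive under high.
B = np.array([[0.25, 0.20, 0.20, 0.15, 0.08, 0.07, 0.05],   # low trust
              [0.05, 0.07, 0.08, 0.10, 0.25, 0.20, 0.25]])  # high trust

pi = np.array([0.5, 0.5])  # uniform prior over trust states

def forward(obs_seq):
    """Return P(high trust | observed gestures) via the forward algorithm."""
    alpha = pi * B[:, GESTURES.index(obs_seq[0])]
    for g in obs_seq[1:]:
        alpha = (alpha @ A) * B[:, GESTURES.index(g)]
        alpha /= alpha.sum()  # normalize each step to avoid underflow
    return alpha[1] / alpha.sum()  # posterior of the high-trust state

p = forward(["lean_forward", "open_arms", "arms_in_lap"])
print(round(p, 3))  # posterior of high trust after three positive cues
```

A full version of this idea would learn `A` and `B` from annotated interaction data and threshold the posterior to predict cooperative versus uncooperative behavior.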

Robot Capability and Intention in Trust-based Decisions across Tasks

Artificial Intelligence

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots--inferred capability and intention--and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multifaceted mental models when collaborating with robots across multiple contexts.

INTRODUCTION

Trust is a cornerstone of long-lasting collaboration in human teams, and is crucial for human-robot cooperation [1]. For example, human trust in robots influences usage [2], and willingness to accept information or suggestions [3]. Misplaced trust in robots can lead to poor task allocation and unsatisfactory outcomes.
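The qualitative finding that delegation integrates inferred capability, inferred intention, and overall trust (rather than depending on trust alone) can be caricatured with a simple weighted logistic combination. The functional form and weights below are purely hypothetical assumptions, not the study's fitted model.

```python
import math

# Hypothetical sketch: the decision to delegate a task combines inferred
# capability, inferred intention, and overall trust. Weights and the
# logistic form are illustrative, not estimated from the study's data.

def delegate_probability(capability, intention, trust,
                         w_cap=2.0, w_int=2.0, w_trust=1.0, bias=-2.5):
    """All inputs lie in [0, 1]; returns the probability of delegating."""
    score = w_cap * capability + w_int * intention + w_trust * trust + bias
    return 1.0 / (1.0 + math.exp(-score))

# High overall trust, but low inferred capability for this specific task:
p_low_cap = delegate_probability(capability=0.2, intention=0.9, trust=0.9)
# Same overall trust, but the robot is also judged capable of the task:
p_high_cap = delegate_probability(capability=0.9, intention=0.9, trust=0.9)
print(p_low_cap < p_high_cap)  # trust alone does not decide delegation
```

Under this toy model, two situations with identical overall trust yield different delegation probabilities once task-specific capability estimates differ, mirroring the paper's claim that calibrating overall trust alone is insufficient.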

As AI gains human traits, will it lose human trust?

While design and behavior have always been linked, the connection is gaining new significance with the next generation of technologies, which will be unlike any we have seen so far. They will enable a new era of human augmentation, in which technologies look like us and act like us, often without our input. Human augmentation technologies will be game-changing for companies and their customers. They could open up new ways of engaging consumers -- from conversational interfaces that replace keyboards to digital assistants that autonomously make purchasing decisions -- and create a new generation of empowered "super consumers."

Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

Artificial Intelligence

Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally in order to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
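A minimal sketch of the trust-POMDP's core mechanism, maintaining a belief over latent human trust and updating it from observed human responses, might look as follows. The discrete trust levels, the observation model, and all numbers are illustrative assumptions, not the paper's learned POMDP.

```python
import numpy as np

# Minimal sketch: human trust is a latent variable, and the robot updates
# a belief over discrete trust levels from observed human responses (here,
# whether the human intervenes or lets the robot act). The trust levels
# and observation model below are invented for illustration.

TRUST_LEVELS = [0.0, 0.5, 1.0]  # low, medium, high latent trust

def p_hands_off(trust, risk):
    """P(human lets the robot act): rises with trust, falls with task risk."""
    return max(0.05, min(0.95, trust * (1.0 - risk) + 0.05))

def belief_update(belief, observed_hands_off, risk):
    """Bayes update of the belief over trust levels after one robot action."""
    likelihood = np.array([
        p_hands_off(t, risk) if observed_hands_off
        else 1.0 - p_hands_off(t, risk)
        for t in TRUST_LEVELS
    ])
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])  # uniform prior over trust levels
# The robot moves a low-risk object first and the human does not intervene:
belief = belief_update(belief, observed_hands_off=True, risk=0.2)
print(belief)  # probability mass shifts toward the high-trust level
```

The full trust-POMDP additionally models how the robot's actions change trust over time and plans over this belief to maximize long-term team performance; this sketch covers only the inference step (i) from the abstract.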