Robot Capability and Intention in Trust-based Decisions across Tasks

arXiv.org Artificial Intelligence

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and intention, and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multifaceted mental models when collaborating with robots across multiple contexts. Trust is a cornerstone of long-lasting collaboration in human teams, and is crucial for human-robot cooperation [1]. For example, human trust in robots influences usage [2] and willingness to accept information or suggestions [3]. Misplaced trust in robots can lead to poor task allocation and unsatisfactory outcomes.
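As a purely illustrative aid (not the authors' model), the sketch below shows one way the reported integration could be expressed: a logistic model whose inputs are inferred capability, inferred intention, and overall trust, and whose output is a delegation probability. The function name, weights, bias, and rating scale are assumptions introduced here for illustration.

```python
# Hypothetical sketch: a logistic combination of capability, intention, and
# overall trust into a delegation decision. Weights and bias are placeholders;
# in a real study they would be fit to participants' delegation choices.
import numpy as np

def delegation_probability(capability, intention, overall_trust,
                           weights=(1.5, 1.2, 0.8), bias=-2.0):
    """Return an illustrative probability of delegating a task to the robot.

    capability, intention, overall_trust: ratings scaled to [0, 1].
    """
    z = bias + np.dot(weights, [capability, intention, overall_trust])
    return 1.0 / (1.0 + np.exp(-z))

# Example: high capability with low inferred intention can still suppress delegation.
print(delegation_probability(capability=0.9, intention=0.2, overall_trust=0.7))
print(delegation_probability(capability=0.9, intention=0.9, overall_trust=0.7))
```

Under this toy parameterization, raising the intention estimate alone noticeably increases the delegation probability, mirroring the finding that overall trust by itself does not determine the decision.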


Trust Dynamics in Human Autonomous Vehicle Interaction: A Review of Trust Models

AAAI Conferences

Several ongoing research projects in human-autonomous car interaction are addressing the problem of safe co-existence of human and robot drivers on the road. Automation in cars can vary across a continuum of levels at which it can replace manual tasks. Social relationships, such as anthropomorphic behavior of owners towards their cars, are also expected to vary along this spectrum of autonomous decision-making capacity. Some researchers have proposed a joint cognitive model of human-car collaboration that can make the best of the respective strengths of humans and machines. For a successful collaboration, it is important that the members of this human-car team develop, maintain, and update each other's behavioral models. We consider mutual trust an integral part of these models. In this paper, we present a review of quantitative models of trust in automation. We found that only a few models of humans' trust in automation in the literature account for the dynamic nature of trust and may be leveraged in human-car interaction. However, these models do not support mutual trust. Our review suggests that there is significant scope for future research in the domain of mutual trust modeling for human-car interaction, especially when considered over the lifetime of the vehicle. Hardware and computational frameworks (for sensing, data aggregation, processing, and modeling) must be developed to support these adaptive models over the operational phase of autonomous vehicles. To further research in mutual human-automation trust, we propose a framework for integrating mutual trust computation into standard human-robot interaction research platforms. This framework includes user trust and agent trust, the two fundamental components of mutual trust. It allows us to harness multi-modal sensor data from the car as well as from the user's wearable or handheld device. The proposed framework provides access to prior trust aggregates and other cars' experience data from the Cloud, and to feature primitives such as gaze and facial expression from a standard low-cost human-robot interaction platform.
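A minimal sketch of what such a framework skeleton might look like is given below, assuming exponential-smoothing trust updates and placeholder feature names (gaze_on_road, relaxed_expression, driver_performance). None of these classes, weights, or signals come from the paper; they only illustrate the user-trust/agent-trust split and a dynamic update step.

```python
# Illustrative skeleton (an assumption, not the paper's implementation) of a
# mutual-trust model: a user-trust estimate updated from multimodal feature
# primitives and an agent-trust estimate updated from driving performance.
from dataclasses import dataclass, field

@dataclass
class TrustEstimate:
    value: float = 0.5          # current trust level in [0, 1]
    learning_rate: float = 0.1  # how quickly new evidence shifts the estimate

    def update(self, evidence: float) -> float:
        """Exponential-smoothing update from a new evidence sample in [0, 1]."""
        self.value += self.learning_rate * (evidence - self.value)
        return self.value

@dataclass
class MutualTrustModel:
    user_trust: TrustEstimate = field(default_factory=TrustEstimate)   # human's trust in the car
    agent_trust: TrustEstimate = field(default_factory=TrustEstimate)  # car's trust in the human

    def step(self, user_features: dict, driver_performance: float) -> None:
        # Placeholder mapping from feature primitives to a scalar trust cue;
        # real weights would come from sensing and modeling on the platform.
        cue = 0.5 * user_features.get("gaze_on_road", 0.5) \
            + 0.5 * user_features.get("relaxed_expression", 0.5)
        self.user_trust.update(cue)
        self.agent_trust.update(driver_performance)

model = MutualTrustModel()
model.step({"gaze_on_road": 0.8, "relaxed_expression": 0.9}, driver_performance=0.7)
print(model.user_trust.value, model.agent_trust.value)
```

A cloud-backed prior could be injected by initializing each TrustEstimate.value from an aggregate rather than the 0.5 default, which is one way to read the framework's "prior trust aggregate" component.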


Adapting Autonomous Behavior Based on an Estimate of an Operator's Trust

AAAI Conferences

Robots can be added to human teams to provide improved capabilities or to perform tasks that humans are unsuited for. However, to get the full benefit of the robots, the human teammates must use them in the appropriate situations. If the humans do not trust the robots, they may underutilize or disuse them, which could result in a failure to achieve team goals. We present a robot that is able to estimate its trustworthiness and adapt its behavior accordingly. This technique helps the robot remain trustworthy even when changes in context, task, or teammates are possible.
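A minimal sketch of this idea, assuming the robot tracks recent task outcomes as a trustworthiness proxy and falls back to a conservative behavior below a threshold, is shown below. The class name, window size, threshold, and behavior labels are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: self-estimated trustworthiness from recent outcomes,
# with behavior adaptation when the estimate drops.
from collections import deque

class SelfTrustAdaptiveRobot:
    def __init__(self, window: int = 10, threshold: float = 0.7):
        self.outcomes = deque(maxlen=window)  # 1.0 = success, 0.0 = failure
        self.threshold = threshold

    def record_outcome(self, success: bool) -> None:
        self.outcomes.append(1.0 if success else 0.0)

    def estimated_trustworthiness(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def choose_behavior(self) -> str:
        # Below the threshold, ask the operator for confirmation so that the
        # human's trust stays calibrated to the robot's actual performance.
        if self.estimated_trustworthiness() < self.threshold:
            return "request_operator_confirmation"
        return "act_autonomously"

robot = SelfTrustAdaptiveRobot()
for ok in [True, True, False, False, False]:
    robot.record_outcome(ok)
print(robot.estimated_trustworthiness(), robot.choose_behavior())
```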


Trust and Cognitive Load During Human-Robot Interaction

arXiv.org Artificial Intelligence

This paper presents an exploratory study of the relationship between a human's cognitive load, trust, and anthropomorphism during human-robot interaction. To study this relationship, we created a "Matching the Pair" game that participants could play collaboratively with one of two robot types, Husky or Pepper. The goal was to understand whether humans would trust the robot as a teammate while in a game-playing situation that demanded a high level of cognitive load. By using a humanoid versus a technical robot, we also investigated the impact of physical anthropomorphism, and we furthermore tested the impact of robot error rate on subsequent judgments and behavior. Our results showed an inversely proportional relationship between trust and cognitive load, suggesting that as participants' cognitive load increased, their ratings of trust decreased. We also found a three-way interaction involving robot type, error rate, and participants' ratings of trust. Participants perceived Pepper to be more trustworthy than the Husky robot after playing the game with both robots under the high error-rate condition; conversely, Husky was perceived as more trustworthy than Pepper when it featured a low error rate. These results call for further investigation of the impact of physical anthropomorphism in combination with variable robot error rates.
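For readers unfamiliar with this kind of interaction analysis, the sketch below shows one generic way a robot-type by error-rate interaction on trust ratings can be tested. The data frame values, column names, and model formula are invented for illustration only and are not the authors' data or analysis pipeline.

```python
# Hedged sketch: testing a robot-type x error-rate interaction on trust
# ratings with an ordinary least-squares model (illustrative toy data only).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "trust":      [4.2, 4.0, 2.9, 3.1, 3.0, 2.8, 4.1, 4.3],
    "robot":      ["Pepper"] * 4 + ["Husky"] * 4,
    "error_rate": ["high", "high", "low", "low"] * 2,
})

# A significant C(robot):C(error_rate) term would indicate a crossover pattern
# like the one described in the abstract.
model = smf.ols("trust ~ C(robot) * C(error_rate)", data=df).fit()
print(model.summary())
```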


Shared Awareness, Autonomy and Trust in Human-Robot Teamwork

AAAI Conferences

Teamwork requires mutual trust among team members. Establishing and maintaining trust depends upon alignment of mental models, an aspect of shared awareness. We present a theory of how maintenance of model alignment is integral to fluid changes in relative control authority (i.e., adaptive autonomy) in human-robot teamwork.