Modeling Trust in Human-Robot Interaction: A Survey

arXiv.org Artificial Intelligence

As the autonomy and capabilities of robotic systems increase, they are expected to act as teammates rather than tools and to interact with human collaborators in a more natural, human-like manner. Given the impact of trust observed in human-robot interaction (HRI), appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot teams. Team performance can be diminished if people do not trust robots appropriately, disusing or misusing them based on limited experience. Therefore, trust in HRI needs to be calibrated properly, rather than maximized, so that human collaborators can form an appropriate level of trust. For trust calibration in HRI, trust first needs to be modeled. There are many reviews of the factors affecting trust in HRI; however, since none of them focuses specifically on trust models, in this paper we review different techniques and methods for trust modeling in HRI. We also present a list of potential directions for further research and some challenges that need to be addressed in future work on human-robot trust modeling.
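
To make the notion of a trust model more concrete, the sketch below shows one very simple, performance-driven form such a model can take. It is an illustrative example, not a model proposed in the survey; the update rule, learning rate, and reliability values are assumptions.

```python
# Illustrative sketch only (not from the survey): trust is smoothed toward the
# robot's observed performance, and calibration is read as keeping the trust
# estimate close to the robot's true reliability.

def update_trust(trust: float, success: bool, learning_rate: float = 0.2) -> float:
    """Move the trust estimate toward 1.0 after a success, toward 0.0 after a failure."""
    target = 1.0 if success else 0.0
    return trust + learning_rate * (target - trust)

def calibration_gap(trust: float, true_reliability: float) -> float:
    """Positive gap suggests overtrust (misuse risk); negative suggests undertrust (disuse risk)."""
    return trust - true_reliability

# Example: a robot that actually succeeds about 70% of the time.
trust = 0.5  # neutral prior
for success in [True, True, False, True, False, True, True, False, True, True]:
    trust = update_trust(trust, success)
print(f"estimated trust: {trust:.2f}, calibration gap: {calibration_gap(trust, 0.7):+.2f}")
```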


Robot Capability and Intention in Trust-based Decisions across Tasks

arXiv.org Artificial Intelligence

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and inferred intention, and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We developed an online survey in which human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust alone is not sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multifaceted mental models when collaborating with robots across multiple contexts. Trust is a cornerstone of long-lasting collaboration in human teams, and is crucial for human-robot cooperation [1]. For example, human trust in robots influences usage [2] and willingness to accept information or suggestions [3]. Misplaced trust in robots can lead to poor task allocation and unsatisfactory outcomes.
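
As a hedged illustration of the "integration" finding described above (and not the authors' actual analysis), the snippet below combines inferred capability, inferred intention, and overall trust in a logistic delegation model; the weights and bias are made-up placeholders.

```python
# Hypothetical sketch of a delegation decision that integrates inferred capability,
# inferred intention, and overall self-reported trust. Weights and bias are
# placeholders, not estimates from the study.
import math

def delegate_probability(capability: float, intention: float, overall_trust: float,
                         w_cap: float = 2.0, w_int: float = 2.0, w_trust: float = 1.0,
                         bias: float = -3.0) -> float:
    """Inputs are assumed to be normalized to [0, 1]."""
    score = w_cap * capability + w_int * intention + w_trust * overall_trust + bias
    return 1.0 / (1.0 + math.exp(-score))

# High overall trust alone is not enough when inferred intention is low:
print(delegate_probability(capability=0.9, intention=0.2, overall_trust=0.8))  # ~0.50
print(delegate_probability(capability=0.9, intention=0.9, overall_trust=0.8))  # ~0.80
```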


Warmth and Competence to Predict Human Preference of Robot Behavior in Physical Human-Robot Interaction

arXiv.org Artificial Intelligence

In cognitive science and social psychology, Warmth and Competence are considered fundamental dimensions of social cognition, i.e., the social judgment of our peers [1], [7]. Fiske et al. provide evidence that those dimensions are universal and reliable for social judgment across stimuli, cultures, and time [1]. People perceived as warm and competent elicit uniformly positive emotions [1], are in general more favored, and experience more positive interactions with their peers [6]. The opposite is true for people scoring low on these dimensions, meaning they experience more negative interactions. There is a large body of work evaluating the perception of and interaction with robots. In this paper we are interested in understanding which metrics indicate human preferences, i.e., which robot a person would choose to interact with again, if given a choice. Agreeing upon a metric for this in human-robot interaction (HRI) would provide important benefits [2], but raises the question: which metric should we use? The human engagement in an interaction could serve as an indicator for their preference.
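
One plausible, purely illustrative way to operationalize this is sketched below: average Likert-scale items into Warmth and Competence scores per robot and use them to predict which robot a participant would choose again. The item names, the 1-7 scale, and the decision rule are assumptions, not the paper's instrument.

```python
# Hypothetical sketch: turn per-robot Likert ratings into Warmth and Competence
# scores and use them as a simple preference predictor. Items and scale are assumed.
from statistics import mean

WARMTH_ITEMS = ["friendly", "trustworthy", "good-natured"]   # assumed warmth items
COMPETENCE_ITEMS = ["capable", "skillful", "efficient"]       # assumed competence items

def dimension_scores(ratings):
    """Average 1-7 Likert ratings into (warmth, competence) scores."""
    warmth = mean(ratings[item] for item in WARMTH_ITEMS)
    competence = mean(ratings[item] for item in COMPETENCE_ITEMS)
    return warmth, competence

robot_a = {"friendly": 6, "trustworthy": 5, "good-natured": 6,
           "capable": 4, "skillful": 4, "efficient": 5}
robot_b = {"friendly": 3, "trustworthy": 4, "good-natured": 3,
           "capable": 6, "skillful": 6, "efficient": 6}

scores = {name: dimension_scores(r) for name, r in [("A", robot_a), ("B", robot_b)]}
predicted = max(scores, key=lambda name: sum(scores[name]))
print(scores, "-> predicted preferred robot:", predicted)
```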


Impact of Explanation on Trust of a Novel Mobile Robot

arXiv.org Artificial Intelligence

One challenge with introducing robots into novel environments is misalignment between supervisor expectations and reality, which can greatly affect a user's trust and continued use of the robot. We performed an experiment to test whether the presence of an explanation of expected robot behavior affected a supervisor's trust in an autonomous robot. We measured trust both subjectively through surveys and objectively through a dual-task experimental design that captures supervisors' neglect tolerance (i.e., their willingness to perform their own task while the robot is acting autonomously). Our objective results show that explanations can help counteract the novelty effect of seeing a new robot perform in an unknown environment. Participants who received an explanation of the robot's behavior were more likely to focus on their own task, at the risk of neglecting their robot supervision task, during the first trials of the robot's behavior compared to those who did not receive an explanation. However, this effect diminished after seeing multiple trials, and participants who received explanations were as trusting of the robot's behavior as those who did not receive explanations. Interestingly, participants were not able to identify their own changes in trust through their survey responses, demonstrating that the dual-task design measured subtler changes in a supervisor's trust.
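
The dual-task measure lends itself to a simple objective summary. The sketch below is one assumed way such a neglect-tolerance score could be computed from attention-switch logs; it is not the authors' actual analysis pipeline, and the event-log format is invented for illustration.

```python
# Hypothetical sketch: summarize neglect tolerance as the fraction of a trial the
# supervisor spends on their own task while the robot runs autonomously.

def neglect_tolerance(attention_log, trial_end):
    """attention_log: chronological (timestamp, focus) events, focus in {"own_task", "robot"}.
    Returns the fraction of the trial spent focused on the supervisor's own task."""
    interval_ends = [t for t, _ in attention_log[1:]] + [trial_end]
    own_time = sum(end - start
                   for (start, focus), end in zip(attention_log, interval_ends)
                   if focus == "own_task")
    return own_time / trial_end

# Example: the supervisor checks on the robot twice during a 60-second trial.
log = [(0.0, "own_task"), (20.0, "robot"), (25.0, "own_task"),
       (45.0, "robot"), (50.0, "own_task")]
print(f"neglect tolerance: {neglect_tolerance(log, trial_end=60.0):.2f}")  # ~0.83
```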


PAS influences your Trust in Technology

#artificialintelligence

"Trusting AI too much can turn out to be fatal" is the headline of the Financial Times' review of a 2018 car crash that took Walter H.'s life. Walter was driving a Tesla SUV (Tesla Model X P100D) on Autopilot when the car hit a barrier and was then struck by two other vehicles. The National Transportation Safety Board analyzed the case: next to various environmental and technical factors, the driver's over-reliance on the Autopilot was one factor that presumably caused the accident. Before the crash, the 38-year-old Apple engineer was immersed in a video game and trusted the Autopilot to bring him safely to his next destination, which, unfortunately, he never reached. Disturbing stories of humans over-relying on technology with fatal consequences are not isolated cases, as it turns out. There is even a term for this in Death Valley National Park: Death by GPS. Park rangers witness death by GPS rather frequently: the GPS gives strange directions (often technically correct, e.g., the shortest path goes over a mountaintop or through a river), and people follow them unquestioningly, get lost, and die in the inhospitable conditions of the national park.