Modeling Trust in Human-Robot Interaction: A Survey

arXiv.org Artificial Intelligence

As the autonomy and capabilities of robotic systems increase, they are expected to act as teammates rather than tools and to interact with human collaborators in a more natural, human-like manner. Given the impact trust has been observed to have in human-robot interaction (HRI), appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot teams. Team performance suffers when people, drawing on limited experience, trust robots inappropriately and consequently disuse or misuse them. Trust in HRI therefore needs to be calibrated rather than maximized, so that human collaborators form an appropriate level of trust. Calibrating trust in HRI first requires modeling it. Many reviews examine the factors affecting trust in HRI, but none focuses on the trust models themselves; in this paper, we review the different techniques and methods for trust modeling in HRI. We also present potential directions for further research and challenges that future work on human-robot trust modeling needs to address.
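
To make the notion of trust calibration concrete, here is a minimal sketch, entirely our own illustration rather than any model from the survey, of a simple dynamic trust model: the robot's reliability is tracked with a Beta-Bernoulli belief updated from observed task outcomes, and trust is read off as the posterior mean.

```python
# Minimal sketch of a dynamic trust model (illustrative only, not a
# specific model from the survey): trust is the expected reliability of
# the robot under a Beta-Bernoulli belief updated from task outcomes.

class BetaTrustModel:
    def __init__(self, prior_success: float = 1.0, prior_failure: float = 1.0):
        # Beta(alpha, beta) prior over the robot's probability of success.
        self.alpha = prior_success
        self.beta = prior_failure

    def update(self, success: bool) -> None:
        # Each observed task outcome shifts the belief about reliability.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Point estimate of trust = posterior mean reliability in [0, 1].
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    model = BetaTrustModel()
    for outcome in [True, True, False, True]:  # hypothetical task outcomes
        model.update(outcome)
    print(f"estimated trust: {model.trust:.2f}")
```

A calibration mechanism would then compare such an estimate with the human's reported trust and intervene (e.g., through explanations) when the two diverge.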


How Could We Model Cohesiveness in Team Social Fabric in Human-Robot Teams Performing Under Stress?

AAAI Conferences

The paper discusses how a human-robot team can remain “cohesive” while performing under stress. By cohesive, the paper means the team's ability to operate effectively, with individual members remaining interdependent yet autonomous in carrying out tasks. For a human-robot team, we argue that this requires robots to (1) have an adequate sense of that interdependency in terms of the social dynamics within the team, and (2) maintain transparency towards the human team members about what they are doing, why, and to what extent they can achieve their (possibly jointly agreed-upon) goals. The paper reports on recent field experience showing that a failure of transparency reduces human team members' acceptance of autonomous robot behavior. This reduced acceptance harms cohesiveness in two ways: humans and robots fail to maintain common ground and, as a result, fail to maintain trust.
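
As an illustration of point (2), here is a hypothetical sketch of a transparency report the robot might broadcast to its human teammates; the structure and field names are our own assumptions, not the paper's.

```python
# Hypothetical transparency report: what the robot is doing, why, and to
# what extent it expects to achieve the (possibly jointly agreed) goal.
from dataclasses import dataclass


@dataclass
class TransparencyReport:
    current_action: str       # what the robot is doing
    rationale: str            # why it chose this action
    goal: str                 # the (possibly jointly agreed-upon) goal
    estimated_success: float  # expected degree of goal achievement in [0, 1]

    def to_message(self) -> str:
        return (f"Doing '{self.current_action}' because {self.rationale}; "
                f"goal '{self.goal}', expected success {self.estimated_success:.0%}.")


if __name__ == "__main__":
    report = TransparencyReport(
        current_action="scouting the east corridor",
        rationale="the human teammates are clearing the west wing",
        goal="complete the building sweep",
        estimated_success=0.7,
    )
    print(report.to_message())
```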


Using Doctrines for Human-Robot Collaboration to Guide Ethical Behavior

AAAI Conferences

In this paper, we consider the issue of guiding ethical behavior in human-robot teams from a systemic viewpoint. Viewing the team as a sociotechnical complex, we look at how responsibility for actions can arise through the interaction between the different actors in the team as they play specific roles. We define the notion of role, discuss how roles establish a social network, and then use logical notions of multi-agent trust to formalize responsibility as accountability for the capabilities invoked during collaboration.
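
To make the final step concrete, here is a small sketch, our own simplification rather than the paper's formal logic, of treating responsibility as accountability for the capabilities an actor's role exposes and that are invoked during collaboration.

```python
# Illustrative sketch (not the paper's formalization): an actor is
# accountable for an invoked capability if its assigned role exposes it.
from typing import Dict, List, Set

# Hypothetical role -> capability assignments in a human-robot team.
ROLE_CAPABILITIES: Dict[str, Set[str]] = {
    "operator": {"authorize_action", "abort_mission"},
    "scout_robot": {"navigate", "report_contact"},
}


def accountable_actors(invoked_capability: str,
                       role_assignment: Dict[str, str]) -> List[str]:
    """Return the actors accountable for an invoked capability, i.e.
    those whose assigned role includes that capability."""
    return [actor for actor, role in role_assignment.items()
            if invoked_capability in ROLE_CAPABILITIES.get(role, set())]


if __name__ == "__main__":
    assignment = {"alice": "operator", "r2": "scout_robot"}
    print(accountable_actors("authorize_action", assignment))  # ['alice']
```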


Shared Awareness, Autonomy and Trust in Human-Robot Teamwork

AAAI Conferences

Teamwork requires mutual trust among team members. Establishing and maintaining trust depends upon alignment of mental models, an aspect of shared awareness. We present a theory of how maintenance of model alignment is integral to fluid changes in relative control authority (i.e., adaptive autonomy) in human-robot teamwork.
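
A minimal sketch, again our own illustration rather than the authors' theory, of how adaptive autonomy might be coupled to model alignment: the robot's relative control authority shrinks as the divergence between the human's and the robot's mental models grows.

```python
# Illustrative only: map a non-negative model-divergence score to a
# control-authority level; higher divergence -> less robot autonomy.

def control_authority(model_divergence: float,
                      max_authority: float = 1.0,
                      sensitivity: float = 2.0) -> float:
    """Return a control-authority level in (0, max_authority]."""
    return max_authority / (1.0 + sensitivity * max(model_divergence, 0.0))


if __name__ == "__main__":
    for divergence in (0.0, 0.5, 2.0):  # hypothetical divergence scores
        print(f"divergence={divergence:.1f} -> "
              f"authority={control_authority(divergence):.2f}")
```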


Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration

arXiv.org Artificial Intelligence

To interact seamlessly with robots, users must infer the causes of a robot's behavior and be confident about that inference. Hence, trust is a necessary condition for human-robot collaboration (HRC). Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interaction with nonhuman artefacts. Here, we review the literature on trust, human-robot interaction, human-robot collaboration, and human interaction at large. Early models of trust suggest that trust entails a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We then introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. In this setting, interactive feedback becomes a necessary component of the trustor's perception-action cycle. The resulting model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust to be defined in terms of active inference, information exchange, and empowerment. Furthermore, this model suggests that boredom and surprise may serve as markers of under- and over-reliance on the system. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
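
The surprise marker mentioned above can be illustrated with a toy sketch, a deliberate simplification of the active-inference account rather than the authors' model: surprise is the negative log-probability the user's predictive model assigns to the robot's observed behavior, and its running average is read as a crude reliance marker.

```python
# Toy sketch (our simplification, not the authors' model): surprise as
# negative log-probability of observed robot actions under the user's
# predictive model; sustained low surprise ~ boredom / over-reliance,
# sustained high surprise ~ loss of "virtual control" / under-reliance.
import math
from typing import Dict, List


def surprise(observation: str, predictive_model: Dict[str, float]) -> float:
    # Small probability floor so unmodeled actions give large, finite surprise.
    p = max(predictive_model.get(observation, 0.0), 1e-6)
    return -math.log(p)


def reliance_marker(surprise_history: List[float],
                    low: float = 0.2, high: float = 2.0) -> str:
    mean_surprise = sum(surprise_history) / len(surprise_history)
    if mean_surprise < low:
        return "boredom: possible over-reliance"
    if mean_surprise > high:
        return "high surprise: possible under-reliance"
    return "calibrated reliance"


if __name__ == "__main__":
    # Hypothetical user model of what the robot will do next.
    model = {"hand_over_part": 0.7, "pause": 0.2, "retract_arm": 0.1}
    history = [surprise(action, model) for action in ["hand_over_part"] * 5]
    print(reliance_marker(history))
```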