"Dave...I can assure you...that it's going to be all right..." -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships

arXiv.org Machine Learning

As technology becomes more advanced, those who design, use, and are otherwise affected by it want to know that it will perform correctly, to understand why it does what it does, and to know how to use it appropriately. In essence, they want to be able to trust the systems being designed. In this survey, assurances are presented as the means by which users can understand how to trust autonomous systems. Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted. A survey of existing research related to assurances is presented; much of this research originates from fields such as interpretable, comprehensible, transparent, and explainable machine learning, as well as human-computer interaction, human-robot interaction, and e-commerce. Several key ideas are extracted from this work in order to refine the definition of assurances. The design of assurances is found to depend not only on the capabilities of the autonomous system, but also on the characteristics of the human user and the appropriate trust-related behaviors. Several directions for future research are identified and discussed.


Machine Self-Confidence in Autonomous Systems via Meta-Analysis of Decision Processes

arXiv.org Artificial Intelligence

Algorithmic assurances from advanced autonomous systems assist human users in understanding, trusting, and using such systems appropriately. Designing these systems with the capacity to assess their own capabilities is one approach to creating an algorithmic assurance. The idea of 'machine self-confidence' is introduced for autonomous systems. Using a factorization-based framework for self-confidence assessment, one component of self-confidence, called 'solver quality', is discussed in the context of Markov decision processes (MDPs) for autonomous systems. MDPs underlie much of the theory of reinforcement learning and are commonly used for planning and decision making under uncertainty in robotics and autonomous systems. A solver-quality metric is formally defined for decision-making algorithms based on MDPs, and a method for assessing it is derived, drawing inspiration from empirical hardness models. Finally, numerical experiments for an unmanned autonomous vehicle navigation problem under different solver, parameter, and environment conditions indicate that the self-confidence metric exhibits the desired properties. A discussion of the results and avenues for future investigation are included.
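
The abstract does not spell out the metric itself; as a rough illustration of the kind of quantity a solver-quality assessment might produce, the sketch below scores a deployed MDP policy against a trusted reference policy via Monte Carlo rollouts and squashes the return gap into a bounded score. The function names (rollout_return, solver_quality), the reference-policy comparison, and the tanh mapping are assumptions made here for illustration; the paper's actual method is derived from empirical hardness models.

```python
import numpy as np

def rollout_return(policy, step_fn, s0, gamma=0.95, horizon=50, rng=None):
    """Simulate one episode of the MDP and accumulate the discounted return."""
    rng = rng if rng is not None else np.random.default_rng()
    s, total, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)
        s, r, done = step_fn(s, a, rng)   # user-supplied transition/reward model
        total += discount * r
        discount *= gamma
        if done:
            break
    return total

def solver_quality(policy, reference_policy, step_fn, s0, n_rollouts=200, **kwargs):
    """Map the return gap between the deployed solver's policy and a trusted
    reference policy onto a bounded score in [-1, 1]; positive values mean the
    solver performs at least as well as the reference on this task."""
    r_solver = np.mean([rollout_return(policy, step_fn, s0, **kwargs)
                        for _ in range(n_rollouts)])
    r_ref = np.mean([rollout_return(reference_policy, step_fn, s0, **kwargs)
                     for _ in range(n_rollouts)])
    gap = (r_solver - r_ref) / (abs(r_ref) + 1e-9)
    return float(np.tanh(gap))
```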


Modeling Trust in Human-Robot Interaction: A Survey

arXiv.org Artificial Intelligence

As the autonomy and capabilities of robotic systems increase, they are expected to act as teammates rather than tools and to interact with human collaborators in a more natural, human-like manner. Given the impact of trust observed in human-robot interaction (HRI), appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot teams. Team performance can be diminished if people do not trust robots appropriately, for example by disusing or misusing them based on limited experience. Therefore, trust in HRI needs to be calibrated properly, rather than maximized, so that human collaborators form an appropriate level of trust in their robotic partners. Trust calibration in HRI first requires that trust be modeled. Although there are many reviews of the factors affecting trust in HRI, there are no reviews focused on the different trust models themselves; in this paper, we therefore review techniques and methods for trust modeling in HRI. We also present a list of potential directions for further research and some challenges that need to be addressed in future work on human-robot trust modeling.
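
To make the idea of a dynamic, performance-driven trust model concrete, here is one toy example of the kind of model such a survey covers: a decayed Beta-Bernoulli estimator in which trust is the posterior mean of the robot's success probability. The class name, discount factor, and Beta-Bernoulli form are choices made here for illustration, not a specific model from the paper.

```python
class BetaTrustModel:
    """Illustrative dynamic trust estimator: trust is the posterior mean of a
    Beta(alpha, beta) distribution over the robot's success probability,
    updated after every observed interaction outcome."""

    def __init__(self, alpha=1.0, beta=1.0, decay=0.98):
        self.alpha, self.beta, self.decay = alpha, beta, decay

    def update(self, success: bool) -> None:
        # Discount older evidence so the trust estimate can track changes in
        # robot performance rather than freezing after many interactions.
        self.alpha *= self.decay
        self.beta *= self.decay
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean of the success probability, always in [0, 1].
        return self.alpha / (self.alpha + self.beta)
```

For example, calling model.update(success=False) repeatedly drives model.trust toward 0, while a run of successes pulls it back up, which is the qualitative behavior a calibration scheme would rely on.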


Trust Dynamics in Human Autonomous Vehicle Interaction: A Review of Trust Models

AAAI Conferences

Several ongoing research projects in human-autonomous car interaction are addressing the problem of safe co-existence of human and robot drivers on the road. Automation in cars can vary across a continuum of levels at which it replaces manual tasks. Social relationships, such as the anthropomorphic behavior of owners towards their cars, are also expected to vary along this spectrum of autonomous decision-making capacity. Some researchers have proposed a joint cognitive model of human-car collaboration that can make the best of the respective strengths of humans and machines. For a successful collaboration, it is important that the members of this human-car team develop, maintain, and update models of each other's behavior. We consider mutual trust an integral part of these models. In this paper, we present a review of quantitative models of trust in automation. We found that only a few models of humans' trust in automation account for the dynamic nature of trust and may be leveraged in human-car interaction; moreover, these models do not support mutual trust. Our review suggests that there is significant scope for future research on mutual trust modeling for human-car interaction, especially when considered over the lifetime of the vehicle. Hardware and computational frameworks (for sensing, data aggregation, processing, and modeling) must be developed to support these adaptive models over the operational phase of autonomous vehicles. To further research in mutual human-automation trust, we propose a framework for integrating mutual trust computation into standard human-robot interaction research platforms. This framework includes User trust and Agent trust, the two fundamental components of mutual trust. It allows us to harness multi-modal sensor data from the car as well as from the user's wearable or handheld device. The proposed framework provides access to prior trust aggregates and other cars' experience data from the cloud, and to feature primitives such as gaze and facial expression from a standard low-cost human-robot interaction platform.
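
The abstract names the framework's two components but not how they might be computed. The dataclass below is a deliberately simple sketch of how User trust and Agent trust could be seeded from a cloud-level prior and then nudged by per-trip, multi-modal observations. All feature names (takeover rate, gaze, handover compliance), weights, and the exponential-smoothing update are hypothetical placeholders, not the paper's proposal.

```python
from dataclasses import dataclass

@dataclass
class MutualTrust:
    """Toy container for the two components of mutual trust: the user's trust
    in the vehicle (User trust) and the vehicle's trust in the user (Agent
    trust). Both start from a prior, e.g. a fleet-level aggregate from the
    cloud, and are updated from per-trip observations."""
    user_trust: float = 0.5
    agent_trust: float = 0.5
    rate: float = 0.1  # smoothing rate for per-trip updates

    def update_user_trust(self, takeover_rate: float, gaze_off_road: float) -> None:
        # Fewer manual takeovers and less visual monitoring of the road are
        # treated here as behavioral proxies for higher user trust (inputs in [0, 1]).
        observed = 0.5 * (1.0 - takeover_rate) + 0.5 * gaze_off_road
        self.user_trust += self.rate * (observed - self.user_trust)

    def update_agent_trust(self, handover_compliance: float, erratic_inputs: float) -> None:
        # The agent trusts a user who complies with handover requests and
        # provides smooth control inputs during manual driving phases.
        observed = 0.5 * handover_compliance + 0.5 * (1.0 - erratic_inputs)
        self.agent_trust += self.rate * (observed - self.agent_trust)
```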


Explainability in Human-Agent Systems

arXiv.org Artificial Intelligence

This paper presents a taxonomy of explainability in Human-Agent Systems. We consider fundamental questions about the Why, Who, What, When, and How of explainability. First, we define explainability and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in a system, whom it is geared to, and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing, as it requires evaluating all of the other issues regarding explainability.
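
As one concrete instance of the "objective measures" the abstract mentions, the snippet below computes surrogate fidelity: how often an interpretable surrogate reproduces the black-box model's predictions on held-out inputs. This is a standard metric in the explainability literature rather than a measure defined in this particular paper, and the function and argument names are illustrative.

```python
import numpy as np

def explanation_fidelity(black_box_predict, surrogate_predict, X_eval) -> float:
    """Fraction of evaluation inputs on which an interpretable surrogate agrees
    with the black-box model -- a common objective (faithfulness-style) measure
    of explanation quality."""
    y_black_box = np.asarray(black_box_predict(X_eval))
    y_surrogate = np.asarray(surrogate_predict(X_eval))
    return float(np.mean(y_black_box == y_surrogate))
```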