Modeling Trust in Human-Robot Interaction: A Survey

arXiv.org Artificial Intelligence

As the autonomy and capabilities of robotic systems increase, they are expected to play the role of teammates rather than tools and to interact with human collaborators in a more realistic, human-like manner. Given the impact of trust observed in human-robot interaction (HRI), appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot teams. Team performance can be diminished if people disuse or misuse robots because they trust them inappropriately on the basis of limited experience. Therefore, trust in HRI needs to be calibrated properly, rather than maximized, so that human collaborators can form an appropriate level of trust. For trust calibration in HRI, trust needs to be modeled first. While there are many reviews on factors affecting trust in HRI, none concentrates on the trust models themselves; in this paper, we therefore review the different techniques and methods for trust modeling in HRI. We also present a list of potential directions for further research and some challenges that need to be addressed in future work on human-robot trust modeling.


Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning

arXiv.org Artificial Intelligence

The success of deep learning in recent years has led to a significant increase in interest in its adoption for financial services tasks. One question that often arises as a barrier to adopting deep learning for financial services is whether the developed financial deep learning models are fair in their predictions, particularly in light of the strong governance and regulatory compliance requirements of the financial services industry. A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust, whose variations may point to an egocentric view of fairness and thus provide insights into the fairness of models. In this study we explore the feasibility and utility of a multi-scale trust quantification strategy for gaining insights into the fairness of a financial deep learning model under different scenarios at different scales. More specifically, we conduct multi-scale trust quantification on a deep neural network for credit card default prediction to study: 1) the overall trustworthiness of the model, 2) the trust level under all possible prediction-truth relationships, 3) the trust level across the spectrum of possible predictions, 4) the trust level across different demographic groups (e.g., age, gender, and education), and 5) the distribution of overall trust for an individual prediction scenario. The insights from this proof-of-concept study demonstrate that such a multi-scale trust quantification strategy may help data scientists and regulators in financial services, as part of the verification and certification of financial deep learning solutions, to gain insights into the fairness and trust of these solutions.
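
As an illustration of the multi-scale idea, the sketch below quantifies trust at three of the scales listed above: overall, per prediction-truth relationship, and per demographic group. The per-sample trust measure, the function names, and the toy data are assumptions made for illustration; the paper's actual trust quantification formulation may differ.

```python
# Minimal sketch of multi-scale trust quantification (illustrative only).
# Assumption: per-sample trust rewards confidence when the prediction is
# correct and penalizes confidence when it is wrong; the paper's actual
# trust measure may be defined differently.
import numpy as np

def sample_trust(confidence: np.ndarray, correct: np.ndarray) -> np.ndarray:
    """Per-sample trust: confidence if correct, (1 - confidence) if not."""
    return np.where(correct, confidence, 1.0 - confidence)

def multiscale_trust(confidence, y_pred, y_true, groups):
    correct = (y_pred == y_true)
    t = sample_trust(confidence, correct)
    report = {"overall": t.mean()}
    # Trust under each prediction-truth relationship.
    for p in np.unique(y_pred):
        for y in np.unique(y_true):
            mask = (y_pred == p) & (y_true == y)
            if mask.any():
                report[f"pred={p},truth={y}"] = t[mask].mean()
    # Trust across demographic groups (e.g., age buckets).
    for g in np.unique(groups):
        report[f"group={g}"] = t[groups == g].mean()
    return report

# Example: six credit-default predictions with softmax confidences (toy data).
conf = np.array([0.9, 0.8, 0.6, 0.95, 0.7, 0.55])
pred = np.array([1, 0, 1, 0, 1, 0])
true = np.array([1, 0, 0, 0, 1, 1])
grp  = np.array(["<30", "<30", "30+", "30+", "30+", "<30"])
print(multiscale_trust(conf, pred, true, grp))
```

Comparing the per-group means against the overall mean is what surfaces the egocentric-fairness signal the abstract refers to: a group whose trust level diverges sharply from the rest warrants closer inspection.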


Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

arXiv.org Artificial Intelligence

Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally in order to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
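
To make the latent-trust idea concrete, here is a minimal sketch of Bayesian belief tracking over discrete trust levels after each robot action, the core inference step in a POMDP with trust as a latent state. The transition and observation probabilities below are illustrative assumptions, not the parameters learned in the paper.

```python
# Minimal sketch of belief tracking over latent human trust, in the spirit
# of a trust-POMDP. All numbers are illustrative assumptions.
import numpy as np

# T[outcome][i, j]: P(trust level j at t+1 | trust level i, action outcome).
T = {
    "success": np.array([[0.6, 0.4, 0.0],
                         [0.0, 0.7, 0.3],
                         [0.0, 0.1, 0.9]]),
    "failure": np.array([[0.9, 0.1, 0.0],
                         [0.4, 0.6, 0.0],
                         [0.1, 0.4, 0.5]]),
}

# O[i]: P(human lets the robot act | trust level i). Higher trust means
# less intervention; the complement is the probability of intervening.
O = np.array([0.2, 0.6, 0.9])

def update_belief(belief, outcome, human_intervened):
    """One Bayes-filter step: predict with T, then correct with O."""
    predicted = belief @ T[outcome]
    likelihood = (1.0 - O) if human_intervened else O
    posterior = predicted * likelihood
    return posterior / posterior.sum()

b = np.array([1 / 3, 1 / 3, 1 / 3])  # uniform prior over low/medium/high trust
b = update_belief(b, "success", human_intervened=False)
print(b)  # belief shifts toward higher trust after an unchallenged success
```

Planning then proceeds over this belief: an action's value includes its effect on the trust distribution, which is how the robot can rationally choose a low-risk object first, or even an intentional failure, to steer trust toward the calibrated level.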


Trust Prediction with Propagation and Similarity Regularization

AAAI Conferences

Online social networks have been used for a variety of rich activities in recent years, such as investigating potential employees and seeking recommendations for high-quality services and service providers. In such activities, trust is one of the most critical factors in users' decision-making. In the literature, state-of-the-art trust prediction approaches focus either on dispositional trust tendency and the trust propagated along pairwise trust relationships on a path, or on the similarity of trust rating values. However, there are other influential factors that should be taken into account, such as the similarity of trust rating distributions. In addition, tendency, propagated trust and similarity are of different types, being either personal or interpersonal properties, yet this difference has been neglected in existing models. Therefore, trust prediction needs to take all of the above factors into consideration and to process them separately and differently. In this paper we propose a new trust prediction model based on trust decomposition and matrix factorization that considers all of the above influential factors and differentiates between personal and interpersonal properties. In this model, we first decompose trust into trust tendency and tendency-reduced trust. Then, based on tendency-reduced trust ratings, matrix factorization with a regularization term is leveraged to predict the tendency-reduced values of missing trust ratings, incorporating both propagated trust and the similarity of users' rating habits. Finally, the missing trust ratings are recomposed from the predicted tendency-reduced values and the trust tendency values. Experiments conducted on a real-world dataset illustrate the significant improvement in trust prediction accuracy delivered by our approach over state-of-the-art approaches.
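
The decompose-factorize-recompose pipeline can be sketched as follows. This generic version substitutes plain L2 regularization for the paper's propagation and similarity regularizers, and all function names and numbers are toy assumptions for illustration.

```python
# Minimal sketch of trust decomposition + regularized matrix factorization.
# Assumption: the paper's propagation/similarity regularizers are replaced
# here by plain L2 regularization for brevity.
import numpy as np

def predict_trust(R, mask, rank=2, lam=0.1, lr=0.05, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: decompose trust into per-truster tendency and a residual.
    tendency = (R * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    resid = (R - tendency[:, None]) * mask
    # Step 2: factorize the tendency-reduced matrix: resid ~ U @ V.T.
    n, m = R.shape
    U, V = rng.normal(0, 0.1, (n, rank)), rng.normal(0, 0.1, (m, rank))
    for _ in range(epochs):
        err = (U @ V.T - resid) * mask  # error only on observed entries
        U -= lr * (err @ V + lam * U)
        V -= lr * (err.T @ U + lam * V)
    # Step 3: recompose predictions from tendency + predicted residual.
    return tendency[:, None] + U @ V.T

# 1.0 in mask marks an observed trust rating; 0.0 marks a missing one.
R = np.array([[0.9, 0.8, 0.0],
              [0.7, 0.0, 0.6],
              [0.0, 0.3, 0.2]])
mask = (R > 0).astype(float)
print(predict_trust(R, mask).round(2))
```

The key design choice the abstract argues for is visible in steps 1 and 3: the personal property (tendency) is removed before factorization and restored afterwards, so that the interpersonal properties are modeled on tendency-free residuals.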


Trust Dynamics in Human Autonomous Vehicle Interaction: A Review of Trust Models

AAAI Conferences

Several ongoing research projects in human-autonomous car interaction are addressing the problem of safe co-existence of human and robot drivers on the road. Automation in cars can vary across a continuum of levels at which it replaces manual tasks. Social relationships, such as the anthropomorphic behavior of owners towards their cars, are also expected to vary according to this spectrum of autonomous decision-making capacity. Some researchers have proposed a joint cognitive model of human-car collaboration that can make the best of the respective strengths of humans and machines. For a successful collaboration, it is important that the members of this human-car team develop, maintain and update models of each other's behavior. We consider mutual trust an integral part of these models. In this paper, we present a review of quantitative models of trust in automation. We found that only a few models of humans' trust in automation in the literature account for the dynamic nature of trust and may be leveraged in human-car interaction. However, these models do not support mutual trust. Our review suggests that there is significant scope for future research on mutual trust modeling for human-car interaction, especially when considered over the lifetime of the vehicle. Hardware and computational frameworks (for sensing, data aggregation, processing and modeling) must be developed to support these adaptive models over the operational phase of autonomous vehicles. To further research in mutual human-automation trust, we propose a framework for integrating mutual trust computation into standard human-robot interaction research platforms. This framework includes User trust and Agent trust, the two fundamental components of mutual trust. It allows us to harness multi-modal sensor data from the car as well as from the user's wearable or handheld device. The proposed framework provides access to prior trust aggregates and other cars' experience data from the cloud, and to feature primitives such as gaze and facial expression from a standard low-cost human-robot interaction platform.
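
As a rough illustration of what an adaptive mutual trust model inside such a framework might look like, the sketch below couples a User trust and an Agent trust estimate in a first-order update loop. The gain, the evidence signals, and the class design are assumptions for illustration, not part of the proposed framework or the reviewed models.

```python
# Minimal sketch of a mutual-trust update loop (a first-order dynamic model;
# the coupling and gain values are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class MutualTrust:
    user_trust: float = 0.5    # user's trust in the autonomous car
    agent_trust: float = 0.5   # car's trust in the user (e.g., attentiveness)
    gain: float = 0.2          # how fast each estimate tracks new evidence

    def step(self, car_performance: float, user_attentiveness: float):
        """One interaction: each trust estimate moves toward fresh evidence,
        e.g. car_performance from driving outcomes and user_attentiveness
        from the gaze/facial-expression primitives the framework exposes."""
        self.user_trust += self.gain * (car_performance - self.user_trust)
        self.agent_trust += self.gain * (user_attentiveness - self.agent_trust)
        return self.user_trust, self.agent_trust

mt = MutualTrust()
for perf, attn in [(0.9, 0.4), (0.9, 0.5), (0.2, 0.8)]:  # toy sensor stream
    print(mt.step(perf, attn))
```

A model of this shape captures the dynamic property the review calls for (trust rises and decays with evidence over repeated interactions) while keeping the two directions of trust as separate, individually updatable state.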