Yang, X. Jessie
TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot Teams
Guo, Yaohui, Yang, X. Jessie, Shi, Cong
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, any human agent has two types of experiences with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
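The abstract distinguishes direct experiences (a human observing a robot's performance first-hand) from indirect experiences (information about the robot received from a human teammate) but does not reproduce the model's equations. The following is a minimal toy sketch of how the two experience types could feed a single trust estimate; the update rule, weights, and all names are illustrative assumptions, not the TIP model itself.

```python
from typing import Optional

def update_trust(prior_trust: float,
                 direct_outcome: Optional[float] = None,
                 reported_trust: Optional[float] = None,
                 w_direct: float = 0.3,
                 w_indirect: float = 0.1) -> float:
    """Return an updated trust value in [0, 1].

    prior_trust     -- the human agent's current trust in the robot
    direct_outcome  -- 1.0 / 0.0 for an observed success / failure, or None
    reported_trust  -- a teammate's communicated trust in the robot, or None

    The weighted-error update below is a hypothetical illustration of
    combining direct and indirect experiences, not the published model.
    """
    trust = prior_trust
    if direct_outcome is not None:        # direct experience with the robot
        trust += w_direct * (direct_outcome - trust)
    if reported_trust is not None:        # indirect experience via a teammate
        trust += w_indirect * (reported_trust - trust)
    return min(1.0, max(0.0, trust))


# Example: trust rises after an observed success and a favorable teammate report.
print(update_trust(0.5, direct_outcome=1.0, reported_trust=0.8))
```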
From the Head or the Heart? An Experimental Design on the Impact of Explanation on Cognitive and Affective Trust
Zhang, Qiaoning, Yang, X. Jessie, Robert, Lionel P. Jr
Automated vehicles (AVs) are social robots that can potentially benefit our society. According to the existing literature, AV explanations can promote passengers' trust by reducing the uncertainty associated with the AV's reasoning and actions. However, the literature on AV explanations and trust has not considered how the type of trust, cognitive versus affective, might alter this relationship. Yet, the existing literature has shown that the implications of trust vary widely depending on whether it is cognitive or affective. To address this shortcoming and better understand the impacts of explanations on trust in AVs, we designed a study to investigate the effectiveness of explanations on both cognitive and affective trust. We expect the results to inform the design of AV explanations that promote trust in AVs.
An Automated Vehicle (AV) like Me? The Impact of Personality Similarities and Differences between Humans and AVs
Zhang, Qiaoning, Esterwood, Connor, Yang, X. Jessie, Robert, Lionel P. Jr
To better understand the impacts of similarities and dissimilarities in human and AV personalities, we conducted an experimental study with 443 individuals. Generally, similarities in human and AV personalities led to a higher perception of AV safety only when both were high in specific personality traits. Dissimilarities in human and AV personalities also yielded a higher perception of AV safety, but only when the AV was higher than the human in a particular personality trait.