Human-AI Team Performance




The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making

Takayanagi, Takehiro, Hashimoto, Ryuji, Chen, Chung-Chi, Izumi, Kiyoshi

arXiv.org Artificial Intelligence

In AI-assisted decision-making, it is crucial but challenging for humans to appropriately rely on AI, especially in high-stakes domains such as finance and healthcare. This paper addresses this problem from a human-centered perspective by presenting an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level. We first demonstrate the impact of self-confidence shaping by quantifying the upper-bound improvement in human-AI team performance. Our behavioral experiments with 121 participants show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI. We then introduce a self-confidence prediction task to identify when our intervention is needed. Our results show that simple machine-learning models achieve 67% accuracy in predicting self-confidence. We further illustrate the feasibility of such interventions. The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence. Finally, we outline future research directions to support the deployment of self-confidence shaping in real-world scenarios for effective human-AI collaboration.
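
To make the prediction task concrete, here is a minimal sketch of the kind of simple machine-learning model the abstract reports achieving about 67% accuracy. The behavioral features (response time, sentiment score, difficulty rating) and the synthetic labels are hypothetical illustrations, not the authors' actual dataset or feature set.

    # Minimal sketch of a binary self-confidence prediction task.
    # Features and labels below are synthetic stand-ins, not the paper's data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical per-decision features.
    X = np.column_stack([
        rng.normal(8.0, 2.0, n),   # response time in seconds (assumed feature)
        rng.normal(0.0, 1.0, n),   # sentiment score (assumed feature)
        rng.integers(1, 6, n),     # self-rated task difficulty (assumed feature)
    ])
    # Binary target: 1 = high self-confidence, 0 = low (synthetic labels).
    y = (X[:, 1] - 0.1 * X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

    # A simple model, in the spirit of the "simple ML models" the paper mentions.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")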


Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration

Spitzer, Philipp, Holstein, Joshua, Morrison, Katelyn, Holstein, Kenneth, Satzger, Gerhard, Kühl, Niklas

arXiv.org Artificial Intelligence

Across various applications, humans increasingly use black-box artificial intelligence (AI) systems without insight into these systems' reasoning. To counter this opacity, explainable AI (XAI) methods promise enhanced transparency and interpretability. While recent studies have explored how XAI affects human-AI collaboration, few have examined the potential pitfalls caused by incorrect explanations. The implications for humans can be far-reaching but have not been explored extensively. To investigate this, we ran a study (n=160) on AI-assisted decision-making in which humans were supported by XAI. Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice, with implications that persist after collaboration: humans infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge. Additionally, incorrect explanations compromise human-AI team performance during collaboration. With our work, we contribute to HCI by providing empirical evidence for the negative consequences of incorrect explanations on humans post-collaboration and by outlining guidelines for designers of AI.


On the Effect of Contextual Information on Human Delegation Behavior in Human-AI Collaboration

Spitzer, Philipp, Holstein, Joshua, Hemmer, Patrick, Vössing, Michael, Kühl, Niklas, Martin, Dominik, Satzger, Gerhard

arXiv.org Artificial Intelligence

Despite the remarkable capabilities of AI in specialized tasks such as image recognition and natural language processing, its integration into human workflows remains a complex challenge [11]. In particular, the effectiveness of AI is fundamentally linked to how people perceive, understand, and ultimately use these systems [11, 17, 46, 67]. One of the primary interaction forms in human-AI collaboration is delegation [2, 27, 35, 69]. For such computer-supported cooperative work (CSCW) scenarios, it is important to design human-AI collaboration systems so that humans are supported in identifying and delegating to the AI those task instances it can decide correctly, while handling themselves the instances on which the AI would err [27]. In AI-assisted decision-making domains, the ability to discern when to delegate becomes particularly crucial. This is underscored by the human-in-the-loop concept [73], in which human oversight is not only imperative for effective collaboration but may also be mandated by regulatory frameworks [18].
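
As a concrete illustration of the delegation criterion above, here is a minimal sketch. It assumes we already have (imperfect) per-instance estimates of how likely the AI and the human are to decide correctly; both estimators and the decision margin are hypothetical, not something the paper provides.

    # Minimal sketch of a per-instance delegation rule: send an instance to the
    # AI only when it is estimated to beat the human by some margin.
    # The probability estimates and the margin are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class Instance:
        task_id: int
        p_ai_correct: float     # estimated P(AI decides this instance correctly)
        p_human_correct: float  # estimated P(human decides it correctly)

    def delegate_to_ai(inst: Instance, margin: float = 0.05) -> bool:
        """Delegate when the AI's estimated advantage exceeds the margin."""
        return inst.p_ai_correct > inst.p_human_correct + margin

    tasks = [
        Instance(task_id=1, p_ai_correct=0.92, p_human_correct=0.70),
        Instance(task_id=2, p_ai_correct=0.55, p_human_correct=0.80),
    ]
    for t in tasks:
        print(t.task_id, "-> AI" if delegate_to_ai(t) else "-> human")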


Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making

Inkpen, Kori, Chappidi, Shreya, Mallari, Keri, Nushi, Besmira, Ramesh, Divya, Michelucci, Pietro, Mandava, Vani, Vepřek, Libuše Hannah, Quinn, Gabrielle

arXiv.org Artificial Intelligence

Human-AI collaboration for decision-making strives to achieve team performance that exceeds the performance of humans or AI alone. However, many factors can impact the success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more. This work examines users' interaction with three simulated algorithmic models, all with similar overall accuracy but tuned differently on their true-positive and true-negative rates. Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled. Our results show that while recommendations from an AI assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance. Novice users improved, but not to the accuracy level of the AI. Highly proficient users were generally able to discern when they should follow the AI recommendation and typically maintained or improved their performance. Mid-performers, who had a similar level of accuracy to the AI, were the most variable in terms of whether the AI recommendations helped or hurt their performance. In addition, we found that users' perception of the AI's performance relative to their own also had a significant impact on whether their accuracy improved when given AI recommendations. This work provides insights into the complexity of factors involved in Human-AI collaboration and offers recommendations on how to develop human-centered AI algorithms that complement users in decision-making tasks.
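
To illustrate what "similar accuracy but different tuning of true-positive and true-negative rates" means operationally, here is a minimal sketch of simulating such AI recommendations. The specific rate pairs and the balanced labels are illustrative assumptions, not the study's actual models.

    # Minimal sketch: simulate AI recommendations whose overall accuracy is held
    # (roughly) constant while TPR and TNR are traded off against each other.
    import numpy as np

    def simulate_ai(labels: np.ndarray, tpr: float, tnr: float,
                    seed: int = 0) -> np.ndarray:
        """Return binary recommendations matching the requested TPR/TNR."""
        rng = np.random.default_rng(seed)
        rec = labels.copy()
        pos, neg = labels == 1, labels == 0
        # Positives are answered correctly with prob tpr, negatives with prob tnr.
        rec[pos] = rng.random(pos.sum()) < tpr
        rec[neg] = ~(rng.random(neg.sum()) < tnr)
        return rec.astype(int)

    # Balanced "flowing (1) / stalled (0)" labels, so accuracy = (tpr + tnr) / 2.
    y = np.array([1] * 500 + [0] * 500)
    for tpr, tnr in [(0.9, 0.7), (0.8, 0.8), (0.7, 0.9)]:  # same mean accuracy
        rec = simulate_ai(y, tpr, tnr)
        print(f"tpr={tpr}, tnr={tnr}, accuracy={(rec == y).mean():.3f}")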


The effectiveness of feature attribution methods and its correlation with automatic evaluation scores

Nguyen, Giang, Kim, Daeyoung, Nguyen, Anh

arXiv.org Artificial Intelligence

Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have either proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy automatic-evaluation metrics [52, 66, 68]. In this paper, we conduct the first large-scale user study, on 320 lay and 11 expert users, to shed light on the effectiveness of state-of-the-art attribution methods in assisting humans in ImageNet classification, Stanford Dogs fine-grained classification, and the same two tasks when the input image contains adversarial perturbations. We found that, overall, feature attribution is surprisingly no more effective than showing humans nearest training-set examples. On the hard task of fine-grained dog categorization, presenting attribution maps to humans does not help, but instead hurts, the performance of human-AI teams compared to AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with actual human-AI team performance. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink existing evaluation metrics.
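
For concreteness, here is a minimal sketch of the nearest training-set-examples baseline that the study compares attribution methods against. It assumes images have already been mapped to feature embeddings (for example by a pretrained classifier); the random embeddings below are stand-ins for real data.

    # Minimal sketch of the nearest-training-set-examples baseline: retrieve the
    # most similar training images and show them to the human alongside the AI's
    # prediction. Embeddings here are random placeholders, not real features.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    train_emb = rng.normal(size=(1000, 512))   # training-set embeddings (assumed)
    query_emb = rng.normal(size=(1, 512))      # embedding of the test image

    # Retrieve the 3 most similar training examples under cosine distance.
    nn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(train_emb)
    dist, idx = nn.kneighbors(query_emb)
    print("nearest training examples:", idx[0], "distances:", dist[0].round(3))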