Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making

Haduong, Nikita, Smith, Noah A.

arXiv.org Artificial Intelligence 

The potential is not necessarily realized, however, because of several challenges: debates over the ethical responsibility for decisions [8, 26, 44], the human ability to recognize when AI advice should be taken [43], mental models (biases) regarding AI performance and ability [12, 27] to perform well on subjective tasks, and effects of how the AI advice is delivered [46]. Many research directions thus aim to resolve these barriers to complementary human-AI performance, including having AI systems explain their predictions [4] using explainable AI (XAI) methods, introducing cognitive forcing functions when presenting AI advice [6], adjusting how AI advice is presented and interacted with [40], and adjusting task framing to account for mental models about the types of tasks AI can handle [9].

In AI-assisted decision making, the human makes the final decision and bears full responsibility for its consequences. The performance pressure that accompanies this responsibility can influence decision-making behavior [2]. Most research toward complementary human-AI performance isolates human behavior from the effects of performance pressure, because the field is still rapidly developing its understanding of how humans perceive and work with AI tools. These experiments use tasks with intrinsically high or low stakes, but the stakes carry little tangible consequence for the evaluators themselves. We thus observe a gap in the literature on how people rely on AI assistants under performance pressure, i.e., when the stakes matter personally. In this work, we seek to understand how performance pressure affects the use of AI advice when that advice is provided as a second opinion. We induce performance pressure through a pay-for-performance scheme framed as a loss.