Improving Human-AI Collaboration With Descriptions of AI Behavior

Ángel Alexander Cabrera, Adam Perer, Jason I. Hong

arXiv.org Artificial Intelligence 

To work effectively with AI aids, people need to know when to accept an AI's output and when to override it. People decide when to rely on an AI by using their mental models [3, 30], that is, internal representations of how the AI tends to behave: when it is most accurate, when it is most likely to fail, and so on. A detailed and accurate mental model allows a person to complement an AI system by appropriately relying [37] on its output, while an overly simple or incorrect mental model can lead to blind spots and systematic failures [3, 8]. At worst, people can perform worse than they would have unassisted; for example, clinicians made more errors than average when shown incorrect AI predictions [7, 24]. Mental models are inherently incomplete representations of any system, but numerous factors make it especially challenging to develop adequate mental models of AI systems. First, modern AI systems are often black-box models for which humans cannot see how or why the model made a prediction [54].
