Improving Human-AI Collaboration With Descriptions of AI Behavior
Cabrera, Ángel Alexander, Perer, Adam, Hong, Jason I.
To effectively work with AI aids, people need to know when to accept or override an AI's output. People decide when to rely on an AI by using their mental models [3, 30], or internal representations, of how the AI tends to behave: when it is most accurate, when it is most likely to fail, etc. A detailed and accurate mental model allows a person to effectively complement an AI system by appropriately relying [37] on its output, while an overly simple or wrong mental model can lead to blind spots and systematic failures [3, 8]. At worst, people can perform worse than they would have unassisted; for example, clinicians made more errors than average when shown incorrect AI predictions [7, 24]. Mental models are inherently incomplete representations of any system, but numerous factors make it especially challenging to develop adequate mental models of AI systems. First, modern AI systems are often black-box models for which humans cannot see how or why the model made a prediction [54].
arXiv.org Artificial Intelligence
Jan-5-2023