On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System
Explainable AI (XAI) research has been booming, but the question "**To whom** are we making AI explainable?" has yet to gain sufficient attention. Little of XAI is comprehensible to non-AI experts, who are nonetheless the primary audience and major stakeholders of deployed AI systems in practice. The gap is glaring: what counts as "explained" to AI experts versus non-experts differs greatly in practical scenarios. This gap has produced two distinct cultures of expectations, goals, and forms of XAI in real-life AI deployments. We advocate that it is critical to develop XAI methods for non-technical audiences. We then present a real-life case study in which AI experts provided non-technical explanations of AI decisions to non-technical stakeholders, completing a successful deployment in a highly regulated industry. Finally, we synthesize lessons learned from the case and share a list of suggestions for AI experts to consider when explaining AI decisions to non-technical stakeholders.
arXiv.org Artificial Intelligence
Dec-2-2021
- Country:
  - North America > United States (1.00)
- Genre:
  - Research Report (0.50)
- Technology:
  - Information Technology > Artificial Intelligence
    - Applied AI (0.85)
    - Issues > Social & Ethical Issues (0.35)
    - Machine Learning (1.00)
    - Representation & Reasoning (1.00)