Beyond Accuracy, SHAP, and Anchors -- On the difficulty of designing effective end-user explanations
Zahra Abba Omar, Nadia Nahar, Jacob Tjaden, Inès M. Gilles, Fikir Mekonnen, Jane Hsieh, Christian Kästner, Alka Menon
Modern machine learning produces models that are impossible for users or developers to fully understand -- raising concerns about trust, oversight, and human dignity. Transparency and explainability methods aim to provide some help in understanding models, but it remains challenging for developers to design explanations that are understandable to target users and effective for their purpose. Emerging guidelines and regulations set goals but may not provide effective, actionable guidance to developers. In a controlled experiment with 124 participants, we investigate whether and how specific forms of policy guidance help developers design explanations for an ML-powered screening tool for diabetic retinopathy. Contrary to our expectations, we found that participants across the board struggled to produce quality explanations, to comply with the provided policy requirements for explainability, and to provide evidence of compliance. We posit that participants' noncompliance is in part due to a failure to imagine and anticipate the needs of their audience, particularly non-technical stakeholders. Drawing on cognitive process theory and the sociological imagination to contextualize participants' failure, we recommend educational interventions.
arXiv.org Artificial Intelligence
Jan-28-2025
- Country:
  - Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
  - North America > United States (1.00)
- Genre:
  - Research Report > Experimental Study (1.00)
  - Research Report > New Finding (1.00)
  - Research Report > Strength High (1.00)