Machine Learning Explainability for External Stakeholders
Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang
arXiv.org Artificial Intelligence
As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies of deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning, the lessons drawn from those studies, and a discussion of open challenges.
Jul-10-2020