Please Stop Explaining Black Box Models for High Stakes Decisions

arXiv.org Machine Learning

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: design models that are inherently interpretable.
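To make the abstract's alternative concrete: an "inherently interpretable" model can be something as simple as a shallow decision tree whose rules can be read and audited directly. The sketch below is only an illustration of that idea; the dataset, library, and depth limit are assumptions, not taken from the paper.

```python
# A minimal sketch of an inherently interpretable model using scikit-learn.
# The dataset and max_depth are illustrative choices, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# A shallow tree whose decision rules can be printed and inspected directly,
# rather than a black box that must be explained after the fact.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```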


Stop Talking Gobbledygook to the Business - InformationWeek

#artificialintelligence

Artificial intelligence and machine learning can deliver unprecedented value to the business. Unfortunately, fantastic findings often get lost in translation. To avoid expensive blank stares and stakeholder frustration, data science practitioners also need to master the art of interpreting and explaining results in simple, plain terms business people can understand. In my work with numerous organizations implementing cutting-edge machine learning technology over the past two and a half years, I have seen one common recurring problem: countless data science experts assume that they are communicating results successfully when they are not.


Trusting Machine Learning Models with LIME from Data Skeptic

#artificialintelligence

Episode Info: Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decisions it makes, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to cover only simple problems. The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: it fits a simple surrogate model that approximates the black box only in the neighborhood of a single prediction.
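As a rough sketch of how that looks in practice, the snippet below uses the `lime` Python package to explain one prediction of a black box classifier. The dataset, model, and hyperparameters are assumptions for illustration; only the local-surrogate idea comes from the episode description.

```python
# pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a "black box" model on a toy dataset.
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Build an explainer over the training distribution.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: LIME perturbs the instance, queries the
# black box, and fits a weighted linear surrogate that is faithful only
# in that local neighborhood ("local fidelity").
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

The printed list pairs each feature condition with a weight, i.e. its contribution to this one prediction according to the local surrogate, not a global description of the model.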


Towards Interpretable Explanations for Transfer Learning in Sequential Tasks

AAAI Conferences

People increasingly rely on machine learning (ML) to make intelligent decisions. However, the ML results are often difficult to interpret and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user's ability to trust and use the system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will build the foundation for better human-machine systems in applications for elder care, education, and health care.


Human Interpretable Machine Learning (Part 1) -- The Need and Importance of Model Interpretation

@machinelearnbot

Thanks to all the wonderful folks at DataScience.com, and especially Pramit Choudhary, for helping me discover the amazing world of model interpretation. The field of Machine Learning has gone through some phenomenal changes over the last decade. Having started off as a purely academic and research-oriented domain, it has seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. Rather than just running lab experiments to publish research papers, the key objective of data science and machine learning in the 21st century has shifted to tackling and solving real-world problems, automating complex tasks and making our lives easier and better. More often than not, however, the standard toolbox of machine learning, statistical or deep learning models remains the same.