
Machine Learning Model Interpretability and Explainability

#artificialintelligence

ML/AI models are getting more complex and harder to interpret and explain. A simple, easy-to-explain regression or decision tree model can no longer fully satisfy technical and business needs. More and more people use ensemble methods and deep neural networks to get better predictive accuracy. However, those more complex models are hard to explain, debug, and understand. Thus, many people call them black-box models.


InterpretML

#artificialintelligence

Model interpretability helps developers, data scientists, and business stakeholders in the organization gain a comprehensive understanding of their machine learning models. It can also be used to debug models, explain predictions, and enable the auditing needed to meet regulatory compliance requirements.


An Explanation for eXplainable AI

#artificialintelligence

Artificial intelligence (AI) has been integrated into every part of our lives. A chatbot, enabled by advanced natural language processing (NLP), pops up to assist you while you surf a webpage. A voice recognition system can authenticate you in order to unlock your account. A drone or driverless car can service operations or access areas that are humanly impossible. Machine-learning (ML) predictions are applied to all kinds of decision making.



InterpretML: A Unified Framework for Machine Learning Interpretability

Nori, Harsha, Jenkins, Samuel, Koch, Paul, Caruana, Rich

arXiv.org Machine Learning

InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from github.com/microsoft/interpret.
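To make the blackbox side of this concrete, the sketch below implements the core idea behind Partial Dependence from scratch: fix one feature to each value on a grid, and average the model's predictions over the dataset. The `black_box_model` function and the toy data are hypothetical stand-ins for any trained predictor, not InterpretML's own API; in practice you would pass a fitted model to InterpretML's unified explainer interface instead.

```python
def black_box_model(x):
    """A toy stand-in for a trained model: non-linear in feature 0, linear in feature 1."""
    return x[0] ** 2 + 3.0 * x[1]


def partial_dependence(model, dataset, feature_index, grid):
    """For each grid value v, set the chosen feature to v for every row
    and average the model's predictions over the dataset."""
    pd_values = []
    for v in grid:
        total = 0.0
        for row in dataset:
            modified = list(row)
            modified[feature_index] = v  # intervene on one feature only
            total += model(modified)
        pd_values.append(total / len(dataset))
    return pd_values


# Hypothetical dataset of two features.
data = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]

# Dependence of the prediction on feature 0, averaged over feature 1.
curve = partial_dependence(black_box_model, data, 0, [0.0, 1.0, 2.0])
print(curve)  # [6.0, 7.0, 10.0]: v**2 plus the data's average 3*x1 term (6.0)
```

The resulting curve shows how predictions change, on average, as one feature varies, which is exactly the summary a Partial Dependence plot visualizes.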