InterpretML: A Unified Framework for Machine Learning Interpretability
Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana
InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. It supports two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from github.com/microsoft/interpret.
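To illustrate why generalized additive models (the family the Explainable Boosting Machine belongs to) are considered glassbox, the following toy sketch shows a GAM in pure Python. This is not the package's implementation, and the shape functions are hypothetical; the point is that a prediction decomposes into an intercept plus independent per-feature contributions, each of which can be read off directly.

```python
# Toy generalized additive model (GAM): the score is an intercept plus a
# sum of independent per-feature shape functions. Real EBMs learn these
# shape functions automatically via gradient boosting; here they are
# hand-written purely for illustration.
def gam_predict(x, intercept, shape_fns):
    # Each feature's contribution is computed in isolation, which is
    # what makes the model's behavior directly inspectable.
    contributions = {name: fn(x[name]) for name, fn in shape_fns.items()}
    return intercept + sum(contributions.values()), contributions

# Hypothetical shape functions for a risk score.
shape_fns = {
    "age": lambda v: 0.05 * (v - 40),       # risk rises linearly with age
    "bmi": lambda v: 0.1 * max(v - 25, 0),  # flat until overweight range
}

score, contribs = gam_predict({"age": 60, "bmi": 30}, -1.0, shape_fns)
# score = -1.0 + 1.0 + 0.5 = 0.5; contribs shows each feature's share
```

Because the contributions are additive and independent, plotting each learned shape function gives an exact global explanation of the model, which is what InterpretML's visualization platform renders for EBMs.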
Sep-19-2019