Model Agnostic Supervised Local Explanations
Plumb, Gregory, Molitor, Denali, Talwalkar, Ameet S.
Neural Information Processing Systems
Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that capture aspects of each of these explanation types in order to develop a more thorough understanding of the model. We address this challenge with a novel method called MAPLE that combines local linear modeling with a dual interpretation of random forests (as both a supervised neighborhood approach and a feature selection method). MAPLE has two fundamental advantages over existing interpretability systems.
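To make the "supervised neighborhood" idea concrete, the sketch below is an illustrative Python approximation (not the authors' released implementation): training points are weighted by how often they fall in the same leaf as the query point across the trees of a random forest, and a weighted linear model fit with those weights serves as the local explanation. The function names, the ridge regularizer, and the synthetic data are assumptions for illustration, and MAPLE's feature selection step is omitted.

```python
# Illustrative sketch of a random-forest supervised neighborhood feeding a
# local linear explanation, in the spirit of MAPLE. Not the official code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge


def forest_neighborhood_weights(forest, X_train, x_query):
    """Weight each training point by how often it shares a leaf with x_query."""
    train_leaves = forest.apply(X_train)                 # (n_train, n_trees) leaf ids
    query_leaves = forest.apply(x_query.reshape(1, -1))  # (1, n_trees) leaf ids
    same_leaf = train_leaves == query_leaves             # broadcast over trees
    # Normalize by leaf size so each tree contributes equally to the weights.
    leaf_sizes = same_leaf.sum(axis=0, keepdims=True)
    return (same_leaf / np.maximum(leaf_sizes, 1)).mean(axis=1)


def local_linear_explanation(forest, X_train, y_train, x_query, alpha=1e-3):
    """Fit a weighted ridge model around x_query; its coefficients are the explanation."""
    w = forest_neighborhood_weights(forest, X_train, x_query)
    model = Ridge(alpha=alpha)
    model.fit(X_train, y_train, sample_weight=w)
    return model.coef_, model.intercept_


# Example usage on synthetic data (assumed setup, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
coefs, intercept = local_linear_explanation(rf, X, y, X[0])
print("local coefficients:", np.round(coefs, 2))
```

Because the neighborhood is defined by the forest's own splits rather than by raw distance, the local model inherits the forest's sense of which feature directions matter near the query point.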