Model Agnostic Supervised Local Explanations

Gregory Plumb, Denali Molitor, Ameet S. Talwalkar

Neural Information Processing Systems 

Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method).
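To make the abstract's two roles of the forest concrete, here is a minimal sketch of the idea in Python using scikit-learn and NumPy: training points are weighted by how often they share a leaf with the query point (the supervised neighborhood), the forest's impurity-based importances pick a feature subset, and a weighted ridge regression serves as the local linear model. The names maple_explain, n_keep, and alpha are illustrative assumptions, not the authors' reference implementation, which differs in its details.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def maple_explain(rf, X_train, y_train, x_query, n_keep=5, alpha=1e-3):
    """Fit a local linear model at x_query using the forest's dual roles."""
    # Supervised neighborhood: weight each training point by the fraction
    # of trees in which it lands in the same leaf as the query point.
    train_leaves = rf.apply(X_train)                  # (n_samples, n_trees)
    query_leaves = rf.apply(x_query.reshape(1, -1))   # (1, n_trees)
    weights = (train_leaves == query_leaves).mean(axis=1)

    # Feature selection: keep the most important features by impurity.
    keep = np.argsort(rf.feature_importances_)[::-1][:n_keep]

    # Local linear model: weighted ridge regression on selected features.
    local_model = Ridge(alpha=alpha)
    local_model.fit(X_train[:, keep], y_train, sample_weight=weights)
    prediction = local_model.predict(x_query[keep].reshape(1, -1))[0]
    return prediction, dict(zip(keep, local_model.coef_))

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=500)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred, coefs = maple_explain(rf, X, y, X[0])
print(pred, coefs)

Because the returned coefficients come from an interpretable linear model fit on a data-driven neighborhood, they act as a local explanation, while the leaf-sharing weights expose which training examples influenced it.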
