Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet S. Talwalkar
Neural Information Processing Systems
Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method).
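The core idea — using a random forest's leaves to define supervised neighborhoods and then fitting a weighted linear model as the local explanation — can be sketched as follows. This is a minimal illustration of the general technique, not the authors' reference implementation; the toy data, function names, and forest hyperparameters are hypothetical, and the paper's feature-selection component is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Hypothetical toy data: a nonlinear response that is locally well
# approximated by a linear model.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = X[:, 0] ** 2 + 0.1 * X[:, 1]

# Fit a random forest; its leaf assignments define the supervised
# neighborhoods used to weight the local linear fit.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
train_leaves = forest.apply(X)  # shape: (n_samples, n_trees)

def local_explanation(x):
    """Weighted linear fit around x.

    Each training point is weighted by the fraction of trees in which
    it falls into the same leaf as the query point x.
    """
    query_leaves = forest.apply(x.reshape(1, -1))          # (1, n_trees)
    weights = (train_leaves == query_leaves).mean(axis=1)  # leaf co-occurrence
    lm = LinearRegression().fit(X, y, sample_weight=weights)
    return lm.coef_, lm.intercept_

coef, intercept = local_explanation(np.array([0.5, 0.0, 0.0]))
```

The returned coefficients serve as the local explanation at the query point, while the forest itself remains available as a global model.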