Representer Point Selection for Explaining Deep Neural Networks
We propose to explain the predictions of a deep neural network by pointing to the set of what we call representer points in the training set, for a given test point prediction. Specifically, we show that the pre-activation prediction of a neural network can be decomposed into a linear combination of activations of training points, with the weights corresponding to what we call representer values, which thus capture the importance of each training point for the learned parameters of the network. This provides a deeper understanding of the network than training point influence alone: positive representer values correspond to excitatory training points and negative values to inhibitory points, which, as we show, yields considerably more insight. Our method is also far more scalable, allowing for real-time feedback in a manner not feasible with influence functions.
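The decomposition in the abstract can be illustrated on a toy stand-in for the network's last layer. The following is a minimal sketch, not the paper's implementation: an L2-regularized logistic model is trained on fixed features (playing the role of last-layer activations), and at stationarity its pre-activation on a test point equals a sum of representer contributions alpha_i * (x_i . x_test). All names and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))                  # stand-in for last-layer features
y = (X @ rng.normal(size=d) > 0).astype(float)
lam = 0.1                                    # L2 regularization strength

# Train an L2-regularized logistic model on the features with plain
# gradient descent until (near) stationarity.
w = np.zeros(d)
for _ in range(20000):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n + 2 * lam * w
    w -= 0.1 * grad

# Representer value weights: alpha_i = -(dL/dphi_i) / (2 * lambda * n),
# where phi_i is the pre-activation on training point i.
p = 1 / (1 + np.exp(-X @ w))
alpha = -(p - y) / (2 * lam * n)

x_test = rng.normal(size=d)
direct = x_test @ w                          # pre-activation from the model
decomposed = np.sum(alpha * (X @ x_test))    # sum of representer contributions
print(direct, decomposed)                    # the two agree at stationarity
```

Positive alpha_i (excitatory) push the prediction up for similar test points; negative alpha_i (inhibitory) push it down.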
On the Accuracy of Influence Functions for Measuring Group Effects
Influence functions estimate the effect of removing a training point on a model without the need to retrain. They are based on a first-order Taylor approximation that is guaranteed to be accurate for sufficiently small changes to the model, and so are commonly used to study the effect of individual points in large datasets. However, we often want to study the effects of large groups of training points, e.g., to diagnose batch effects or apportion credit between different data sources.
Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet S. Talwalkar
Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method).