Generalized SHAP: Generating multiple types of explanations in machine learning
Many important questions about a model cannot be answered simply by explaining how much each feature contributes to its output. To answer a broader set of questions, we generalize a popular, mathematically well-grounded explanation technique, Shapley Additive Explanations (SHAP). Our new method, Generalized Shapley Additive Explanations (G-SHAP), produces many additional types of explanations, including: 1) general classification explanations: Why is this sample more likely to belong to one class rather than another? 2) intergroup differences: Why do our model's predictions differ between groups of observations? 3) model failure: Why does our model perform poorly on a given sample? We formally define these types of explanations and illustrate their practical use on real data.
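These explanation types share a common structure: where ordinary SHAP attributes the model's raw output to input features, each question above can be framed as attributing a more general function of the model's predictions (a class-probability contrast, a difference in group means, or a loss). The sketch below illustrates that reading with a simple Monte Carlo permutation estimator of such generalized Shapley values; the helper `monte_carlo_gshap`, the choice of `g`, and the toy data are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative Monte Carlo estimator of generalized Shapley values: Shapley
# values of an arbitrary scalar function g of the inputs (which typically
# wraps the trained model), rather than of the model output itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split


def monte_carlo_gshap(g, X, background, n_samples=100, rng=None):
    """Attribute g(X) to the d features of X via permutation sampling.

    "Absent" features are imputed with rows drawn from `background`,
    as in standard sampling-based SHAP estimators.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)                     # random feature ordering
        masked = background[rng.integers(len(background), size=n)].copy()
        prev = g(masked)                               # value with all features absent
        for j in order:
            masked[:, j] = X[:, j]                     # reveal feature j
            cur = g(masked)
            phi[j] += cur - prev                       # marginal contribution of j
            prev = cur
    return phi / n_samples


# Toy "model failure" explanation: choose g to be the model's log-loss on the
# test samples it misclassifies, so the Shapley values attribute that loss.
X, y = make_classification(n_samples=600, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
wrong = model.predict(X_te) != y_te

g = lambda X_: log_loss(y_te[wrong], model.predict_proba(X_)[:, 1], labels=[0, 1])
phi = monte_carlo_gshap(g, X_te[wrong], background=X_tr, rng=0)
print({f"feature_{j}": round(v, 3) for j, v in enumerate(phi)})
```

Swapping in a different `g`, for example the difference between mean predicted probabilities for two groups of observations, would yield the intergroup-differences explanation with no other changes to the estimator.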
June 15, 2020