Interpretability: Cracking open the black box – Part III

Previously, we looked at the pitfalls of the default "feature importance" in tree-based models and discussed permutation importance, LOOC importance, and Partial Dependence Plots. Now let's switch lanes and look at a few model-agnostic techniques that take a bottom-up approach to explaining predictions. Instead of looking at the model as a whole and trying to come up with global explanations like feature importance, these methods look at each individual prediction and try to explain it. The first of these is LIME (Local Interpretable Model-agnostic Explanations). As the name suggests, it is a model-agnostic technique that generates local explanations for the model's predictions. The core idea behind the technique is quite intuitive. Suppose we have a complex classifier with a highly non-linear decision boundary. Even such a boundary is approximately linear in a small enough neighbourhood around a single point, so we can perturb the instance we want to explain, query the black-box model on those perturbed samples, weight them by their proximity to the instance, and fit a simple interpretable model that approximates the complex boundary locally.
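To make the idea concrete, here is a minimal sketch of such a local surrogate. The dataset, black-box model, kernel width, and number of perturbed samples are all illustrative assumptions, not the setup from the original post:

```python
# A minimal sketch of LIME's core idea: fit a simple, proximity-weighted
# linear surrogate around one prediction of a complex classifier.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A complex classifier with a highly non-linear decision boundary.
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)
black_box = RandomForestClassifier(random_state=42).fit(X, y)

# The instance whose prediction we want to explain.
instance = X[0]

# 1. Perturb: sample points in the neighbourhood of the instance.
rng = np.random.default_rng(42)
perturbed = instance + rng.normal(scale=X.std(axis=0),
                                  size=(1000, X.shape[1]))

# 2. Query the black box for its predictions on the perturbed samples.
preds = black_box.predict_proba(perturbed)[:, 1]

# 3. Weight each sample by its proximity to the instance
#    (exponential kernel; the width 0.75 is an arbitrary choice here).
distances = np.linalg.norm(perturbed - instance, axis=1)
kernel_width = 0.75
weights = np.exp(-(distances ** 2) / kernel_width ** 2)

# 4. Fit an interpretable surrogate: a weighted linear model that
#    approximates the black box only in this neighbourhood.
surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)

# The surrogate's coefficients are the local explanation.
for name, coef in zip(["x1", "x2"], surrogate.coef_):
    print(f"{name}: {coef:+.3f}")
```

In practice, the lime package wraps these steps (perturbation, kernel weighting, and surrogate fitting) behind lime.lime_tabular.LimeTabularExplainer, which also handles categorical features and discretization.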
