How to Explain Individual Classification Decisions
Baehrens, David, Schroeter, Timon, Harmeling, Stefan, Kawanabe, Motoaki, Hansen, Katja, Mueller, Klaus-Robert
After building a classifier with the modern tools of machine learning, we typically have a black box at hand that predicts well on unseen data. Thus we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted that particular label for a single instance, or which features were most influential for that instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
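One way to make the idea of instance-level explanations concrete is a minimal sketch, assuming a probabilistic black-box classifier: estimate, by finite differences, the gradient of the predicted class probability at the instance of interest. Each component of that gradient indicates how strongly, and in which direction, the corresponding feature influences the prediction for this particular instance. The toy logistic model below (weights `W`, function `predict_proba`) is a hypothetical stand-in, not the paper's experimental setup; in practice `predict_proba` would come from any trained model.

```python
import math

# Hypothetical toy black box: a logistic model with fixed weights.
# Any classifier exposing a class-probability function could be used instead.
W = [2.0, -1.0, 0.5]

def predict_proba(x):
    """Probability of the positive class for input x."""
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def local_explanation(predict, x, eps=1e-5):
    """Central finite-difference gradient of predict at x.

    The returned vector is a local, per-instance explanation: large
    positive entries are features pushing the prediction toward the
    positive class at this point, negative entries push away from it.
    """
    grad = []
    for i in range(len(x)):
        x_hi = list(x); x_hi[i] += eps
        x_lo = list(x); x_lo[i] -= eps
        grad.append((predict(x_hi) - predict(x_lo)) / (2 * eps))
    return grad

x = [0.3, -0.2, 1.0]          # the single instance to be explained
expl = local_explanation(predict_proba, x)
```

For this toy model the gradient is proportional to the weight vector, so the explanation correctly recovers each feature's direction of influence; for a nonlinear black box the same procedure yields a different explanation at each instance, which is the point of local explanations.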
Dec-6-2009