How to Explain Individual Classification Decisions
Baehrens, David, Schroeter, Timon, Harmeling, Stefan, Kawanabe, Motoaki, Hansen, Katja, Mueller, Klaus-Robert
After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted a particular label for a single instance, or which features were most influential for that instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which, based on a set of assumptions, allows the decisions of any classification method to be explained.
Dec-6-2009
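
The abstract leaves the procedure itself to the body of the paper, where explanations take the form of local explanation vectors: gradients of the class-probability function evaluated at the instance of interest. A minimal sketch of that idea in Python, using a central-difference approximation around a probabilistic scikit-learn classifier; the model, dataset, and step size below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def explanation_vector(model, x, class_idx, eps=1e-3):
    """Central-difference estimate of the gradient of the predicted
    class probability with respect to each input feature at x.
    Entries with large magnitude mark locally influential features."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        p_hi = model.predict_proba(x_hi.reshape(1, -1))[0, class_idx]
        p_lo = model.predict_proba(x_lo.reshape(1, -1))[0, class_idx]
        grad[i] = (p_hi - p_lo) / (2.0 * eps)
    return grad

# Toy usage: explain one prediction of an otherwise opaque classifier.
X, y = make_classification(n_samples=400, n_features=5, random_state=0)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

x0 = X[0]
label = int(clf.predict(x0.reshape(1, -1))[0])
print("predicted label:", label)
print("explanation vector:", explanation_vector(clf, x0, label))
```

Because the gradient is taken at a single data point, the resulting vector is a local explanation: the same feature can be influential for one instance and irrelevant for another, which is exactly the per-instance behavior the abstract calls for.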