Accurate estimation of feature importance faithfulness for tree models

Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski

arXiv.org Artificial Intelligence 

One of the key challenges in deploying modern machine learning models in areas such as medical diagnosis lies in the ability to indicate why a certain prediction has been made. Such an indication may be of critical importance when a human decides whether the prediction can be relied on. This is one of the reasons why various aspects of explainability of machine learning models have been the subject of extensive research lately (see, e.g., [BH21]). For some basic types of models (e.g., single decision trees), the rationale behind a prediction is easy for a human to understand. However, the predictions of more complex models (which offer much better accuracy, e.g., models based on neural networks or decision tree ensembles) are also much more difficult to interpret. Accurate and concise explanations understandable to humans might not always exist. In such cases, it is still beneficial to have methods that indicate which factors might have influenced the prediction the most.
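
As a concrete illustration (not the method proposed in this paper), the sketch below shows how per-prediction feature attributions are commonly obtained for a decision tree ensemble, using scikit-learn and the shap library's TreeExplainer; the dataset and model choices are illustrative assumptions only.

```python
# Minimal sketch: global and local feature-importance estimates for a
# tree ensemble. Assumes scikit-learn and shap are installed; the
# dataset and hyperparameters are arbitrary illustrative choices.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global importances built into the ensemble (mean impurity decrease).
global_importance = dict(zip(X.columns, model.feature_importances_))

# Local, per-prediction attributions via TreeSHAP.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # attributions for 5 samples
```

Such attribution scores are exactly the kind of explanation whose faithfulness to the underlying model one may wish to estimate.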
