Causality-based Explanation of Classification Outcomes

Leopoldo Bertossi, Jordan Li, Maximilian Schleich, Dan Suciu, Zografoula Vagena

arXiv.org Artificial Intelligence 

Machine-learning (ML) models are increasingly used today to make decisions that affect real people's lives, and, because of that, there is a pressing need to ensure that the models and their decisions are interpretable by their human users. Motivated by this need, there has been a lot of interest recently in the ML community in studying interpretable models [18]. There is currently no consensus on what interpretability means, and no benchmarks for evaluating it [5, 10]. The only consensus is that simpler models, such as linear regression or decision trees, are considered more interpretable than complex models like, say, deep neural nets. However, two general principles for approaching interpretability have emerged in the literature that are relevant to our paper.
