CNN-based explanation ensembling for dataset, representation and explanations evaluation

Hryniewska-Guzik, Weronika, Longo, Luca, Biecek, Przemysław

arXiv.org Artificial Intelligence 

Deep learning models, despite their unprecedented success [1, 2], lack full transparency and interpretability in their decision-making processes [3, 4]. This has led to growing concerns about the use of "black box" models and the need for explanations to better understand their inferential process [5]. Using examples of specific cases from a dataset, generated explanations can reveal which elements are most important for a model's prediction [6, 7, 8]. Currently, explanations generated for trained deep learning models are often presented as individual insights that need to be investigated separately and then compared [9]. Each explanation provides a limited view of the model's decision, as it tends to focus on specific aspects, making it challenging for a human to obtain a comprehensive understanding. This fragmented approach hinders the ability to discern the reasons behind a model's predictions. An emerging trend is explanation ensembling, derived from model ensembling, in which multiple predictive models are combined to reduce the variance of predictions and often achieve higher overall performance; random forests [10] and gradient boosting [11] are examples of such techniques. This trend suggests that individual explanations carry unique pieces of information that, when combined, may form a more comprehensive and accurate understanding of a model's inferential process.
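To make the general idea of explanation ensembling concrete, the following is a minimal sketch, not the CNN-based method proposed in this paper. It assumes that several explanations for the same prediction are available as 2D attribution maps of equal shape (e.g. from Grad-CAM, Integrated Gradients, or SHAP) and combines them by a simple normalized weighted average; the function name and inputs are illustrative only.

```python
import numpy as np

def ensemble_explanations(attributions, weights=None):
    """Combine several attribution maps for one prediction into a single map.

    Baseline strategy for illustration: scale each map to [0, 1] and take a
    (weighted) mean. This is not the paper's CNN-based ensembling approach.
    """
    normalized = []
    for attr in attributions:
        attr = np.asarray(attr, dtype=float)
        span = attr.max() - attr.min()
        normalized.append((attr - attr.min()) / span if span > 0 else np.zeros_like(attr))
    stacked = np.stack(normalized)                 # shape: (n_explanations, H, W)
    if weights is None:
        weights = np.full(len(stacked), 1.0 / len(stacked))
    weights = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (weights * stacked).sum(axis=0)         # aggregated attribution map

# Example: combine three hypothetical saliency maps of shape (224, 224).
maps = [np.random.rand(224, 224) for _ in range(3)]
combined = ensemble_explanations(maps)
```

A simple average treats every explanation method as equally informative; learned or performance-weighted aggregation (as explored in ensembling approaches) can instead emphasize methods that better reflect the model's behavior.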
