Investigating the influence of noise and distractors on the interpretation of neural networks

Kindermans, Pieter-Jan; Schütt, Kristof; Müller, Klaus-Robert; Dähne, Sven

arXiv.org Machine Learning 

Understanding neural networks is becoming increasingly important. Over the last few years, different types of visualisation and explanation methods have been proposed. However, none of them explicitly considered behaviour in the presence of noise and distracting elements. In this work, we show how noise and distracting dimensions can influence the result of an explanation model. This provides new theoretical insights to aid the selection of the most appropriate explanation model within the deep-Taylor decomposition framework.
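
To make the abstract's central claim concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the general phenomenon it refers to: in a simple linear model, a distractor component forces the learned weight vector, which also serves as the gradient-based explanation, to assign relevance to a feature that carries no task-relevant signal at all.

```python
# Hypothetical two-dimensional linear example: a distractor dimension
# changes what a weight/gradient explanation highlights.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

signal = rng.normal(size=n)       # the quantity the model should predict
distractor = rng.normal(size=n)   # task-irrelevant nuisance component

# Data model: the signal appears only in feature 0, but the distractor
# leaks into both features, so feature 1 contains no signal whatsoever.
X = np.column_stack([signal + distractor, distractor])
y = signal

model = LinearRegression().fit(X, y)
print("learned weights (gradient explanation):", model.coef_)
# Approximately [ 1., -1.]: the explanation assigns large (negative)
# relevance to feature 1 even though it holds no signal; the weight on
# that feature exists only to cancel the distractor. Weight- and
# gradient-based visualisations therefore mix signal extraction with
# distractor suppression.
```

Under these assumptions, the experiment illustrates why an explanation method's behaviour should be analysed with noise and distractors in mind, which is the setting the paper studies within the deep-Taylor decomposition framework.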
