Gradient Weighted Superpixels for Interpretability in CNNs

Thomas Hartley, Kirill Sidorov, Christopher Willis, David Marshall

arXiv.org Machine Learning 

Convolutional Neural Networks (CNNs) are often described as black boxes due to the difficulty of explaining how they reach their final output for a given task. Consequently, a number of techniques have been developed to aid in the process of explainability. These techniques range from scoring individual pixels to reflect their impact on the network's decision making, to scoring larger regions of the image. Scoring larger regions allows the results to be more easily interpreted. A popular technique for explaining images is LIME [10]. This uses superpixels (contiguous image regions) for visualisation, allowing a level of interpretability that may not be present in individual pixel scoring. However, this increased interpretability comes at a cost. The LIME technique relies on perturbing the input image and repeatedly passing it to the network to build an understanding of how important each superpixel region is to the final classification. This requires multiple perturbed images to be passed through the network, by default 1000 in the released code.
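The perturbation loop described above can be sketched as follows. This is a minimal illustration, not the authors' or LIME's actual implementation: the CNN is replaced by a hypothetical stand-in scoring function, the superpixel segmentation is hard-coded (in practice it would come from an algorithm such as SLIC), and each superpixel is simply masked to zero when switched off.

```python
import numpy as np

def stand_in_classifier(image):
    # Hypothetical stand-in for the network's class score:
    # here, just the mean brightness of the image.
    return image.mean()

def perturbation_scores(image, segments, n_samples=100, seed=0):
    """Score each superpixel by the average output drop when it is masked."""
    rng = np.random.default_rng(seed)
    labels = np.unique(segments)
    base = stand_in_classifier(image)
    drops = np.zeros(len(labels))   # accumulated output drop per superpixel
    counts = np.zeros(len(labels))  # how often each superpixel was masked
    for _ in range(n_samples):
        # Randomly switch each superpixel off (mask its pixels to zero).
        off = rng.random(len(labels)) < 0.5
        masked = image.copy()
        for lab, is_off in zip(labels, off):
            if is_off:
                masked[segments == lab] = 0.0
        score = stand_in_classifier(masked)
        # Attribute the output drop to every superpixel that was masked.
        drops[off] += base - score
        counts[off] += 1
    return drops / np.maximum(counts, 1)

# Toy 4x4 "image" split into four 2x2 superpixels.
image = np.arange(16, dtype=float).reshape(4, 4)
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])
scores = perturbation_scores(image, segments)
```

Each of the `n_samples` masked images requires one forward pass through the model, which is the cost the abstract refers to: with LIME's default of 1000 samples, explaining a single image means 1000 network evaluations.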
