Gradient Weighted Superpixels for Interpretability in CNNs
Thomas Hartley, Kirill Sidorov, Christopher Willis, David Marshall
Convolutional Neural Networks (CNNs) are often described as black boxes because it is difficult to explain how they reach their final output for a given task. Consequently, a number of techniques have been developed to aid explainability. These range from scoring individual pixels to reflect their impact on the network's decision, to scoring larger regions of the image; scoring larger regions makes the results easier to interpret. A popular technique for explaining images is LIME [10]. It uses superpixels, contiguous image regions, as the unit of visualisation, allowing a level of interpretability that individual pixel scoring may lack. However, this increased interpretability comes at a cost. LIME perturbs the input image and repeatedly passes it through the network to estimate how important each superpixel region is to the final classification. This requires many perturbed images to be passed through the network, 1000 by default in the released code.
Aug-16-2019