Collaborating Authors: Osman, Ahmed


Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI

arXiv.org Artificial Intelligence

The rise of deep learning in today's applications has entailed an increasing need to explain the model's decisions beyond prediction performance, in order to foster trust and accountability. Recently, the field of explainable AI (XAI) has developed methods that provide such explanations for already trained neural networks. In computer vision tasks such explanations, termed heatmaps, visualize the contributions of individual pixels to the prediction. So far, XAI methods and their heatmaps have mainly been validated qualitatively via human-based assessment, or evaluated through auxiliary proxy tasks such as pixel perturbation, weak object localization, or randomization tests. Due to the lack of an objective and commonly accepted quality measure for heatmaps, it remained debatable which XAI method performs best and whether explanations can be trusted at all. In the present work, we tackle this problem by proposing a ground-truth-based evaluation framework for XAI methods, built on the CLEVR visual question answering task. Our framework provides a (1) selective, (2) controlled and (3) realistic testbed for the evaluation of neural network explanations. We compare ten different explanation methods, resulting in new insights about the quality and properties of XAI methods, sometimes contradicting conclusions from previous comparative studies. The CLEVR-XAI dataset and the benchmarking code can be found at https://github.com/ahmedmagdiosman/clevr-xai.
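As an illustration of what such a ground-truth evaluation can look like, below is a minimal sketch of a relevance-mass-style metric: the fraction of positive heatmap relevance that falls inside a ground-truth object mask. The function name and the positive-clipping choice are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def relevance_mass_accuracy(heatmap: np.ndarray, gt_mask: np.ndarray) -> float:
    """heatmap: (H, W) pixel relevances; gt_mask: (H, W) boolean ground-truth mask."""
    pos = np.clip(heatmap, 0.0, None)       # keep positive evidence only
    total = pos.sum()
    if total == 0.0:
        return 0.0                          # degenerate heatmap: no positive relevance
    return float(pos[gt_mask].sum() / total)

# Toy usage: a heatmap concentrated on a 2x2 region scored against a matching mask
heatmap = np.zeros((4, 4)); heatmap[1:3, 1:3] = 1.0
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
print(relevance_mass_accuracy(heatmap, mask))  # 1.0: all relevance inside the mask
```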


DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

arXiv.org Artificial Intelligence

The field of video compression has developed some of the most sophisticated and efficient compression algorithms in the literature, enabling very high compressibility with little loss of information. While some of these techniques are domain-specific, many of their underlying principles are universal in that they can be adapted and applied to compressing different types of data. In this work we present DeepCABAC, a compression algorithm for deep neural networks based on one of the state-of-the-art video coding techniques. Concretely, it applies a Context-based Adaptive Binary Arithmetic Coder (CABAC) to the network's parameters; CABAC was originally designed for the H.264/AVC video coding standard and became the state of the art for lossless compression. Moreover, DeepCABAC employs a novel quantization scheme that minimizes a rate-distortion function while simultaneously taking the impact of quantization on the accuracy of the network into account. Experimental results show that DeepCABAC consistently attains higher compression rates than previously proposed coding techniques for neural network compression. For instance, it is able to compress the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, representing the entire network with merely 8.7 MB. The source code for encoding and decoding can be found at https://github.com/fraunhoferhhi/DeepCABAC.
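To make the rate-distortion quantization idea concrete, the sketch below assigns each weight to the grid point minimizing squared distortion plus lambda times an estimated code length. The uniform grid, the crude two-pass rate estimate, and all names are illustrative assumptions; a real implementation would derive the rate term from the entropy coder itself.

```python
import numpy as np

def rd_quantize(weights, grid, lam=0.01):
    """Assign each weight to the grid point minimizing an RD cost (toy model)."""
    w = np.asarray(weights, dtype=np.float64)
    grid = np.asarray(grid, dtype=np.float64)
    # Crude rate model: -log2 p(q), with p estimated from a first
    # nearest-neighbour pass over the weights.
    nearest = np.abs(w[:, None] - grid[None, :]).argmin(axis=1)
    counts = np.bincount(nearest, minlength=grid.size) + 1  # Laplace smoothing
    rate = -np.log2(counts / counts.sum())                  # bits per grid point
    # RD cost per (weight, grid point): squared error + lambda * bits
    cost = (w[:, None] - grid[None, :]) ** 2 + lam * rate[None, :]
    idx = cost.argmin(axis=1)
    return grid[idx], idx

# Toy usage: quantize 1000 Gaussian weights onto a 17-point uniform grid
quantized, idx = rd_quantize(np.random.randn(1000), np.linspace(-2, 2, 17))
```

Raising lam trades reconstruction fidelity for a shorter bitstream: rare grid points become more expensive, so weights collapse onto fewer, cheaper codewords.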


DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression

arXiv.org Artificial Intelligence

We present DeepCABAC, a novel context-adaptive binary arithmetic coder for compressing deep neural networks. It quantizes each weight parameter by minimizing a weighted rate-distortion function, which implicitly takes the impact of quantization on the accuracy of the network into account. Subsequently, it compresses the quantized values into a bitstream representation with minimal redundancies. We show that DeepCABAC is able to reach very high compression ratios across a wide set of different network architectures and datasets. For instance, we are able to compress the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, thus being able to represent the entire network with merely 8.7 MB.

Of all the different proposed methods, sparsification followed by weight quantization and entropy coding arguably belongs to the set of most popular approaches, since very high compression ratios can be achieved under this paradigm (Han et al., 2015a; Louizos et al., 2017; Wiedemann et al., 2018a;b). Whereas much of the research has focused on the sparsification part, substantially less attention has been devoted to improving the latter two steps. In fact, most of the proposed (post-sparsity) compression algorithms come with at least one of the following caveats: 1) they decouple the quantization procedure from the subsequent lossless compression algorithm, 2) they ignore correlations between the parameters, and 3) they apply a lossless compression algorithm that produces a bitstream with more redundancies than principally needed (e.g. …)
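To make the "context-adaptive" part concrete, here is a minimal sketch of the mechanism CABAC-style coders exploit: each binary symbol is coded under a probability model selected by its context, and the model adapts after every symbol. This sketch only measures the ideal code length (-log2 p per bit) rather than emitting a real arithmetic-coded bitstream; the context choice and update rule are illustrative, not DeepCABAC's actual models.

```python
import math
from collections import defaultdict

def adaptive_code_length(bits, contexts):
    """bits[i] in {0, 1}; contexts[i] is any hashable context id."""
    counts = defaultdict(lambda: [1, 1])   # Laplace-initialized (zeros, ones)
    total_bits = 0.0
    for b, ctx in zip(bits, contexts):
        c = counts[ctx]
        p = c[b] / (c[0] + c[1])           # adaptive estimate for this context
        total_bits += -math.log2(p)        # ideal arithmetic-coding cost
        c[b] += 1                          # model update, mirrored by the decoder
    return total_bits

# Toy usage: a correlated flag stream where the previous bit serves as context
bits = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1] * 50
ctxs = [0] + bits[:-1]                     # context = previously coded bit
print(adaptive_code_length(bits, ctxs) / len(bits), "bits/symbol")
```

Because the stream is correlated, the per-context models converge to skewed probabilities and the average cost drops well below 1 bit per symbol, which is precisely the redundancy that context-free coders leave on the table.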


Evaluating Recurrent Neural Network Explanations

arXiv.org Machine Learning

Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs. The goal of these methods is to understand the network's decisions by assigning to each input variable, e.g., a word, a relevance indicating the extent to which it contributed to a particular prediction. In previous works, some of these methods had not yet been compared to one another, or were evaluated only qualitatively. We close this gap by systematically and quantitatively comparing these methods in different settings, namely (1) a toy arithmetic task which we use as a sanity check, (2) a five-class sentiment prediction task on movie reviews, and (3) an exploration of the usefulness of word relevances for building sentence-level representations. Lastly, using the method that performed best in our experiments, we show how specific linguistic phenomena, such as negation in sentiment analysis, are reflected in relevance patterns, and how relevance visualization can help to understand the misclassification of individual samples.
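As a small illustration of point (3), the sketch below builds a sentence vector by weighting each word embedding with its (clipped) per-word relevance. The weighting scheme and the fallback to plain averaging are illustrative assumptions, not necessarily the construction used in the paper.

```python
import numpy as np

def relevance_weighted_sentence(embeddings: np.ndarray,
                                relevances: np.ndarray) -> np.ndarray:
    """embeddings: (T, d) word vectors; relevances: (T,) per-word scores."""
    r = np.clip(relevances, 0.0, None)     # keep supporting evidence only
    if r.sum() == 0.0:
        return embeddings.mean(axis=0)     # fall back to plain averaging
    return (r[:, None] * embeddings).sum(axis=0) / r.sum()

# Toy usage: 7 words with 300-dim embeddings, one word carrying most relevance
sent = relevance_weighted_sentence(np.random.randn(7, 300),
                                   np.array([0.1, 0.0, 2.3, 0.4, -0.2, 0.9, 0.1]))
```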


Dual Recurrent Attention Units for Visual Question Answering

arXiv.org Machine Learning

We propose an architecture for VQA that uses recurrent layers to generate visual and textual attention. The memory characteristic of the proposed recurrent attention units offers a rich joint embedding of visual and textual features and enables the model to reason about relations between several parts of the image and the question. Our single model outperforms the first-place winner on the VQA 1.0 dataset and performs within a narrow margin of the current state-of-the-art ensemble model. We also experiment with replacing the attention mechanisms in other state-of-the-art models with our implementation and show increased accuracy. In both cases, our recurrent attention mechanism improves performance in tasks requiring sequential or relational reasoning on the VQA dataset.
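The sketch below illustrates the general flavor of recurrent attention with untrained weights: a recurrent state is updated from the previous glimpse and then used to score the image regions for the next attention distribution. This is a toy sketch, not the paper's DRAU architecture; the cell, dimensions, and scoring function are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def recurrent_attention(regions, steps=3, seed=0):
    """regions: (N, d) visual region features; returns the final attended feature."""
    rng = np.random.default_rng(seed)
    n, d = regions.shape
    W_h = rng.standard_normal((d, d)) * 0.1   # recurrence weights (untrained here)
    W_x = rng.standard_normal((d, d)) * 0.1   # glimpse-input weights (untrained here)
    h = np.zeros(d)
    glimpse = regions.mean(axis=0)            # initial glimpse: mean-pooled regions
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_x @ glimpse)  # recurrent state carries attention memory
        alpha = softmax(regions @ h)          # attention distribution over N regions
        glimpse = alpha @ regions             # next glimpse: attention-weighted sum
    return glimpse

# Toy usage: 36 region features of dimension 64, as in grid-based VQA setups
attended = recurrent_attention(np.random.default_rng(1).standard_normal((36, 64)))
```

The point of the recurrence is that each attention step is conditioned on what was attended before, which is what allows sequential, multi-part reasoning over the image.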