
Collaborating Authors: Neumann, David


A Privacy Preserving System for Movie Recommendations Using Federated Learning

arXiv.org Artificial Intelligence

Recommender systems have become ubiquitous in recent years. They solve the tyranny-of-choice problem faced by many users and are employed by many online businesses to drive engagement and sales. Besides other points of criticism, such as creating filter bubbles within social networks, recommender systems are often reproached for collecting considerable amounts of personal data. However, personal information is fundamentally required to personalize recommendations. A recent distributed learning scheme called federated learning makes it possible to learn from personal user data without collecting it centrally. Consequently, we present a recommender system for movie recommendations that provides privacy, and thus trustworthiness, on multiple levels: First and foremost, it is trained using federated learning and is therefore, by its very nature, privacy-preserving, while still enabling users to benefit from global insights. Furthermore, a novel federated learning scheme, called FedQ, is employed, which not only addresses the problem of non-i.i.d.-ness and small local datasets, but also prevents input data reconstruction attacks by aggregating client updates early. Finally, to reduce the communication overhead, compression is applied, which shrinks the exchanged neural network parametrizations to a fraction of their original size. We conjecture that this may also improve data privacy through its lossy quantization stage.
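As a rough illustration of the training setup described above, the sketch below runs one federated round in which client updates are averaged early in small groups before reaching the server, in the spirit of FedQ. The abstract does not specify the protocol's details, so the linear model, the fixed group size, and plain sample-weighted averaging are assumptions made purely for illustration.

import numpy as np

def local_update(global_weights, client_data, lr=0.01, epochs=1):
    """Plain SGD for a linear model on one client's (x, y) samples (illustrative only)."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in client_data:
            grad = (w @ x - y) * x          # gradient of the squared error
            w -= lr * grad
    return w, len(client_data)

def aggregate(updates):
    """Sample-size-weighted average of (weights, n_samples) pairs, as in FedAvg."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

def federated_round(global_weights, clients, group_size=4):
    """One communication round with early, group-wise aggregation:
    the server only ever receives averaged group updates."""
    updates = [local_update(global_weights, data) for data in clients]
    groups = [updates[i:i + group_size] for i in range(0, len(updates), group_size)]
    group_avgs = [(aggregate(g), sum(n for _, n in g)) for g in groups]
    return aggregate(group_avgs)

# Toy usage: 8 clients, each holding a handful of samples of a 3-feature linear task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = [[(x, x @ true_w) for x in rng.normal(size=(5, 3))] for _ in range(8)]
w = np.zeros(3)
for _ in range(50):
    w = federated_round(w, clients)
print(w)   # approaches true_w

Because only group averages ever leave an aggregation stage, no party downstream observes an individual client's update, which is the intuition behind the resistance to input data reconstruction attacks mentioned above.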


Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

arXiv.org Artificial Intelligence

Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood. With recent advances in Explainable Artificial Intelligence (XAI), approaches are available to explore the reasoning behind those complex models' predictions. Among post-hoc attribution methods, Layer-wise Relevance Propagation (LRP) shows high performance. For deeper quantitative analysis, manual approaches exist, but without the right tools they are unnecessarily labor intensive. In this software paper, we introduce three software packages targeted at scientists to explore model reasoning using attribution approaches and beyond: (1) Zennit - a highly customizable and intuitive attribution framework implementing LRP and related approaches in PyTorch, (2) CoRelAy - a framework to easily and quickly construct quantitative analysis pipelines for dataset-wide analyses of explanations, and (3) ViRelAy - a web-application to interactively explore data, attributions, and analysis results. With this, we provide a standardized implementation solution for XAI, to contribute towards more reproducibility in our field.
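To make the attribution idea concrete, the sketch below applies the LRP epsilon rule to a small ReLU network in plain NumPy. It is a conceptual illustration of the propagation principle that Zennit implements for PyTorch models, not an example of Zennit's actual API; the toy network, the stabilizer value, and the choice of starting relevance are illustrative assumptions.

import numpy as np

def lrp_epsilon(layers, x, eps=1e-6):
    """Epsilon-rule LRP for a ReLU MLP given as a list of (W, b) pairs."""
    # Forward pass, keeping the input of every layer.
    activations = [x]
    for i, (W, b) in enumerate(layers):
        z = W @ x + b
        x = np.maximum(z, 0.0) if i < len(layers) - 1 else z   # ReLU on hidden layers only
        activations.append(x)

    # Use the network output as the initial relevance and propagate it back.
    relevance = activations[-1]
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = W @ a + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)              # stabilized denominator
        s = relevance / z
        relevance = a * (W.T @ s)                              # R_j = a_j * sum_k W_kj * s_k
    return relevance

# Toy usage: relevance scores per input feature of a random two-layer network.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)), (rng.normal(size=(1, 8)), np.zeros(1))]
print(lrp_epsilon(layers, rng.normal(size=4)))

With zero biases and a small stabilizer, the per-feature relevances sum approximately to the network output, which is the conservation property the epsilon rule is designed to preserve.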


DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

arXiv.org Artificial Intelligence

The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information. Whilst some of these techniques are domain specific, many of their underlying principles are universal in that they can be adapted and applied to compressing different types of data. In this work we present DeepCABAC, a compression algorithm for deep neural networks that is based on one of the state-of-the-art video coding techniques. Concretely, it applies a Context-based Adaptive Binary Arithmetic Coder (CABAC), which was originally designed for the H.264/AVC video coding standard and became the state of the art for lossless compression, to the network's parameters. Moreover, DeepCABAC employs a novel quantization scheme that minimizes the rate-distortion function while simultaneously taking the impact of quantization on the accuracy of the network into account. Experimental results show that DeepCABAC consistently attains higher compression rates than previously proposed coding techniques for neural network compression. For instance, it is able to compress the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, thus being able to represent the entire network with merely 8.7 MB. The source code for encoding and decoding can be found at https://github.com/fraunhoferhhi/DeepCABAC.
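The following sketch illustrates the rate-distortion idea behind such a quantizer: every weight is mapped to the reconstruction level that minimizes its squared distortion plus a weighted estimate of its coding cost. The uniform grid, the simple probability model used for the rate estimate, and the trade-off parameter lam are assumptions made for illustration; the actual DeepCABAC quantizer estimates rates with CABAC's context models and additionally accounts for the effect of quantization on network accuracy.

import numpy as np

def rd_quantize(weights, lam=0.01):
    """Map each weight to the grid point minimizing distortion + lam * estimated rate."""
    grid = np.linspace(-1.0, 1.0, 41)                      # candidate reconstruction levels (step 0.05)

    # Crude rate model: levels near zero are assumed more probable, hence cheaper to code.
    prob = np.exp(-np.abs(grid) / 0.2)
    prob /= prob.sum()
    rate = -np.log2(prob)                                  # ideal code length in bits per level

    flat = weights.ravel()
    out = np.empty_like(flat)
    for i, w in enumerate(flat):
        cost = (w - grid) ** 2 + lam * rate                # weighted rate-distortion cost
        out[i] = grid[np.argmin(cost)]
    return out.reshape(weights.shape)

# Toy usage: the rate term nudges weights toward cheaper, smaller-magnitude levels,
# and tiny weights collapse to zero.
w = np.array([0.012, -0.031, 0.44, -0.70, 0.003])
print(rd_quantize(w))

Raising lam shifts the balance toward the rate term, pushing more weights onto cheap near-zero levels and thus toward sparser, more compressible parametrizations.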


DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression

arXiv.org Artificial Intelligence

We present DeepCABAC, a novel context-adaptive binary arithmetic coder for compressing deep neural networks. It quantizes each weight parameter by minimizing a weighted rate-distortion function, which implicitly takes the impact of quantization on the accuracy of the network into account. Subsequently, it compresses the quantized values into a bitstream representation with minimal redundancies. We show that DeepCABAC is able to reach very high compression ratios across a wide set of different network architectures and datasets. For instance, we are able to compress the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, thus being able to …

Of all the different proposed methods, sparsification followed by weight quantization and entropy coding arguably belongs to the most popular set of approaches, since very high compression ratios can be achieved under this paradigm (Han et al., 2015a; Louizos et al., 2017; Wiedemann et al., 2018a;b). Whereas much of the research has focused on the sparsification part, substantially less work has focused on improving the latter two steps. In fact, most of the proposed (post-sparsity) compression algorithms come with at least one of the following caveats: 1) they decouple the quantization procedure from the subsequent lossless compression algorithm, 2) they ignore correlations between the parameters, and 3) they apply a lossless compression algorithm that produces a bitstream with more redundancies than principally needed (e.g. …
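To illustrate the third caveat and the benefit of a context-adaptive binary coder, the sketch below compares a naive fixed-length code for a sparse, quantized weight tensor against the ideal code length of an adaptive binary model applied to per-weight significance flags. The binarization into significance and sign bits and the simple frequency-count model are illustrative assumptions; DeepCABAC's actual CABAC engine and context models are considerably more elaborate.

import numpy as np

def adaptive_bits(bit_sequence):
    """Ideal code length of a bit sequence under an adaptive, Laplace-smoothed binary model."""
    counts, total_bits = [1, 1], 0.0
    for b in bit_sequence:
        p = counts[b] / (counts[0] + counts[1])
        total_bits += -np.log2(p)           # ideal code length for this bit
        counts[b] += 1                      # update the model after "coding" the bit
    return total_bits

# Sparse quantized weights: mostly zeros, a few +/-1 levels.
levels = np.random.default_rng(0).choice([-1, 0, 0, 0, 0, 1], size=10_000)
significance = (levels != 0).astype(int)    # 1 where a weight is nonzero

fixed_length = levels.size * 2                                      # naive 2-bit code per level
adaptive = adaptive_bits(significance) + np.count_nonzero(levels)   # flags + one sign bit per nonzero
print(f"fixed-length: {fixed_length} bits, adaptive binary model: {adaptive:.0f} bits")

Because the adaptive model quickly learns that most significance flags are zero, it spends far less than one bit on each of them, which is exactly the kind of redundancy a fixed-length or decoupled coding scheme leaves on the table.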