CNN Heat Maps: Gradients vs. DeconvNets vs. Guided Backpropagation


This post summarizes three closely related methods for creating saliency maps: Gradients (2013), DeconvNets (2014), and Guided Backpropagation (2014). Saliency maps are heat maps intended to provide insight into what aspects of an input image a convolutional neural network is using to make a prediction. All three of the methods discussed in this post are a form of post-hoc attention, which is different from trainable attention. Although the original papers describe these methods in different ways, it turns out that they are all identical except for how they handle backpropagation through the ReLU nonlinearity. Please stay tuned for the next post, "CNN Heat Maps: Sanity Checks for Saliency Maps", for a discussion of a 2018 paper by Adebayo et al. which suggests that of these three popular methods, only "Gradients" is effective.
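That single point of difference can be sketched as three alternative ReLU backward rules. The function below is a schematic illustration of the distinction, not code from any of the papers:

```python
# `forward_input` is the value that entered the ReLU on the forward pass;
# `upstream` is the gradient arriving from the layer above.

def relu_backward(upstream, forward_input, method):
    out = []
    for g, x in zip(upstream, forward_input):
        if method == "gradients":      # plain backprop: gate on the forward input
            out.append(g if x > 0 else 0.0)
        elif method == "deconvnet":    # gate on the gradient's own sign
            out.append(g if g > 0 else 0.0)
        elif method == "guided":       # gate on both conditions
            out.append(g if (x > 0 and g > 0) else 0.0)
    return out

upstream = [1.0, -2.0, 3.0, -4.0]
forward_input = [0.5, 0.5, -0.5, -0.5]
print(relu_backward(upstream, forward_input, "gradients"))  # [1.0, -2.0, 0.0, 0.0]
print(relu_backward(upstream, forward_input, "deconvnet"))  # [1.0, 0.0, 3.0, 0.0]
print(relu_backward(upstream, forward_input, "guided"))     # [1.0, 0.0, 0.0, 0.0]
```

Guided Backpropagation keeps only entries that pass both gates, which is why its maps tend to be the sparsest of the three.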

Basics of AI - Backpropagation (Video)


This is a superb and very simple explanation of one of the basic concepts of AI. It deserves a place in most advanced math curricula, if only to demystify what AI really is. It also explains why AI can now distinguish dogs from cats or recognize individuals. Source: End to End Machine Learning library.

Neural Networks: Feedforward and Backpropagation Explained


Mathematically, this is why we need to understand partial derivatives, since they allow us to compute the relationship between components of the neural network and the cost function. As should be obvious, we want to minimize the cost function; when we know what affects it, we can effectively change the relevant weights and biases to do so. If you are not a math student or have not studied calculus, this may not be clear at all, so let me try to clarify: the "squished d" (∂) is the partial derivative sign.
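As a toy illustration of what a partial derivative buys us (my own example, not the article's): take the cost of a single linear neuron, C(w, b) = (w·x + b − y)², and check the analytic ∂C/∂w against a finite-difference estimate.

```python
x, y = 2.0, 1.0          # one hypothetical training example

def cost(w, b):
    return (w * x + b - y) ** 2

def dC_dw(w, b):
    # analytic partial derivative: chain rule applied to the squared error
    return 2 * (w * x + b - y) * x

# finite-difference check: nudge only w, hold b fixed
w, b, eps = 0.5, 0.1, 1e-6
numeric = (cost(w + eps, b) - cost(w - eps, b)) / (2 * eps)
print(dC_dw(w, b), numeric)  # the two values agree closely
```

The partial derivative tells us how the cost responds when only `w` moves, which is exactly the quantity gradient descent uses to update that weight.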

Universal Backpropagation in 15 lines of code - Ray Born - Medium


Backpropagation is the core of today's deep learning applications. If you've dabbled with deep learning, there's a good chance you're aware of the concept, and if you've ever implemented backpropagation manually, you're probably very grateful that deep learning libraries do it for you automatically. Implementing backprop by hand is arduous, yet the concept behind general backpropagation is very simple; restricted to eager execution, it is almost trivial to implement.
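In the spirit of the article's "a few lines of code" claim, here is a minimal reverse-mode autodiff sketch of my own (not the article's code): each value records how to push gradients back to its parents, and a topological pass replays those rules in reverse.

```python
class Value:
    def __init__(self, data, parents=(), backward=lambda g: None):
        self.data, self.parents, self._backward, self.grad = data, parents, backward, 0.0

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward(g):           # addition routes the gradient unchanged
            self.grad += g
            other.grad += g
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward(g):           # product rule: scale by the other operand
            self.grad += g * other.data
            other.grad += g * self.data
        out._backward = backward
        return out

    def backprop(self):
        # build a topological order, then propagate gradients output-to-input
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward(v.grad)

a, b = Value(2.0), Value(3.0)
c = a * b + a          # dc/da = b + 1 = 4, dc/db = a = 2
c.backprop()
print(a.grad, b.grad)  # 4.0 2.0
```

Eager execution makes this easy precisely because the graph of parents is built as a side effect of running the arithmetic.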

Backpropagation for people who are afraid of math


Backpropagation is one of the most important concepts in machine learning. There are many online resources that explain the intuition behind this algorithm (IMO the best of these is the backpropagation lecture in the Stanford cs231n video lectures; this is another very good source), but getting from the intuition to practice can be, to put it gently, quite challenging. After spending more hours than I'd like to admit trying to get all the sizes of my layers and weights to fit, constantly forgetting what's what and what's connected where, I sat down and drew some diagrams that illustrate the entire process. Consider it visual pseudocode.
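A rough sketch of the shape bookkeeping such diagrams encode (my own illustration; the dimension names are assumptions): for a layer computing z = W·a + b, every gradient must have the same shape as the quantity it is the gradient of, and that constraint pins down the transposes.

```python
n_in, n_out, batch = 4, 3, 8   # hypothetical layer dimensions

shapes = {
    "a": (n_in, batch),    # layer input
    "W": (n_out, n_in),    # weights
    "b": (n_out, 1),       # bias
    "z": (n_out, batch),   # pre-activation, W @ a + b
}

def matmul_shape(s1, s2):
    assert s1[1] == s2[0], "inner dimensions must match"
    return (s1[0], s2[1])

# forward: z = W @ a
assert matmul_shape(shapes["W"], shapes["a"]) == shapes["z"]

# backward: dW = dz @ a.T and da = W.T @ dz -- each matches its target's shape
assert matmul_shape(shapes["z"], shapes["a"][::-1]) == shapes["W"]
assert matmul_shape(shapes["W"][::-1], shapes["z"]) == shapes["a"]
print("all shapes consistent")
```

Checking shapes this way before writing any real math catches most of the "what's connected where" mistakes the author describes.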

Backpropagation-Friendly Eigendecomposition Machine Learning

Eigendecomposition (ED) is widely used in deep networks. However, backpropagating through its results tends to be numerically unstable, whether using ED directly or approximating it with the Power Iteration method, particularly when dealing with large matrices. While this can be mitigated by partitioning the data into small, arbitrary groups, doing so has no theoretical basis and makes it impossible to exploit the full power of ED. In this paper, we introduce a numerically stable and differentiable approach to leveraging eigenvectors in deep networks. It can handle large matrices without requiring them to be split. We demonstrate the greater robustness of our approach over standard ED and PI for ZCA whitening, an alternative to batch normalization, and for PCA denoising, which we introduce as a new normalization strategy for deep networks, aiming to further denoise the network's features.
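For readers unfamiliar with the baseline being improved upon, plain Power Iteration looks like this (a standard textbook sketch, not the paper's stabilized method): repeatedly apply the matrix and normalize, converging on the dominant eigenvector.

```python
def power_iteration(A, steps=100):
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        # apply the matrix, then normalize to unit length
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the matching eigenvalue estimate
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))           # 3.0
```

The instability the paper targets arises when such iterates are differentiated through during backpropagation, especially when eigenvalues are close together.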

A Closer Look at Double Backpropagation Machine Learning

In recent years, an increasing number of neural network models have included derivatives with respect to inputs in their loss functions, resulting in so-called double backpropagation for first-order optimization. However, so far no general description of the involved derivatives exists. Here, we cover a wide array of special cases in a very general Hilbert space framework, which allows us to provide optimized backpropagation rules for many real-world scenarios. This includes the reduction of calculations for Frobenius-norm-penalties on Jacobians by roughly a third for locally linear activation functions. Furthermore, we provide a description of the discontinuous loss surface of ReLU networks both in the inputs and the parameters and demonstrate why the discontinuities do not pose a big problem in reality.
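To make "derivatives with respect to inputs in the loss" concrete, here is a hand-derived toy example of my own (not from the paper): with model f(x) = w·x², a double-backprop loss penalizes the input gradient df/dx, so the weight update must differentiate through that derivative as well.

```python
x, y, lam = 2.0, 1.0, 0.1   # hypothetical data point and penalty weight

def loss(w):
    f = w * x ** 2
    df_dx = 2 * w * x                 # first backprop: gradient w.r.t. the input
    return (f - y) ** 2 + lam * df_dx ** 2

def dloss_dw(w):
    # second ("double") backprop: the penalty term is itself differentiated,
    # so d/dw picks up a contribution from d(df_dx)/dw = 2*x
    return 2 * (w * x ** 2 - y) * x ** 2 + lam * 2 * (2 * w * x) * (2 * x)

w, eps = 0.3, 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(dloss_dw(w), numeric)  # analytic and numeric gradients agree
```

In an autodiff framework this second differentiation is what "double backpropagation" automates; the paper's contribution is a general account of the derivatives involved.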

Memorized Sparse Backpropagation Machine Learning

Neural network learning is typically slow, since backpropagation needs to compute full gradients and backpropagate them across multiple layers. Despite the success of existing work in accelerating propagation through sparseness, the relevant theoretical characteristics remain unexplored, and we empirically find that such methods suffer from a loss of the information contained in unpropagated gradients. To tackle these problems, in this work we present a unified sparse backpropagation framework and provide a detailed analysis of its theoretical characteristics. The analysis reveals that, when applied to a multilayer perceptron, our framework essentially performs gradient descent using an estimated gradient similar enough to the true gradient to converge in probability under certain conditions. Furthermore, a simple yet effective algorithm named memorized sparse backpropagation (MSBP) is proposed to remedy the information loss by storing unpropagated gradients in memory for the next learning step. Experiments demonstrate that MSBP effectively alleviates the information loss of traditional sparse backpropagation while achieving comparable acceleration.
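A hedged sketch of the memorization idea as I read the abstract (this is my own interpretation, not the paper's code): a sparse-backprop step propagates only the top-k gradient entries, and MSBP stores the unpropagated remainder so it can compete again at the next step instead of being discarded.

```python
def msbp_step(grad, memory, k):
    # add the remembered (previously unpropagated) gradient back in
    total = [g + m for g, m in zip(grad, memory)]
    # keep only the k entries with the largest magnitude
    kept_idx = set(sorted(range(len(total)), key=lambda i: abs(total[i]), reverse=True)[:k])
    propagated = [t if i in kept_idx else 0.0 for i, t in enumerate(total)]
    # everything not propagated goes into memory for the next step
    new_memory = [t - p for t, p in zip(total, propagated)]
    return propagated, new_memory

memory = [0.0, 0.0, 0.0, 0.0]
g1, memory = msbp_step([0.5, -0.1, 0.05, 0.2], memory, k=2)
print(g1)   # [0.5, 0.0, 0.0, 0.2] -- the small entries are held in memory
g2, memory = msbp_step([0.0, -0.1, 0.0, 0.0], memory, k=2)
print(g2)   # the remembered -0.1 and 0.05 now compete for propagation
```

Plain top-k sparsification would have thrown the small entries away for good; the memory term is what limits the information loss the paper describes.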

A Graph Theoretic Framework of Recomputation Algorithms for Memory-Efficient Backpropagation Machine Learning

Recomputation algorithms collectively refer to a family of methods that aim to reduce the memory consumption of backpropagation by selectively discarding intermediate results of the forward propagation and recomputing them as needed. In this paper, we propose a novel and efficient recomputation method that can be applied to a wider range of neural nets than previous methods. We use the language of graph theory to formalize the general recomputation problem of minimizing the computational overhead under a fixed memory budget, and provide a dynamic programming solution to it. Our method can reduce peak memory consumption on various benchmark networks by 36% to 81%, which outperforms the reduction achieved by other methods.
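The basic trade-off can be illustrated with a simplified checkpointing sketch (my own illustration of the general idea, not the paper's graph-theoretic algorithm): store only every k-th activation during the forward pass, and recompute the others from the nearest stored checkpoint when the backward pass needs them.

```python
# stand-in "layers": layer i adds i to its input
layers = [lambda x, i=i: x + i for i in range(8)]

def forward_with_checkpoints(x, k=3):
    checkpoints = {0: x}
    for i, f in enumerate(layers):
        x = f(x)
        if (i + 1) % k == 0:
            checkpoints[i + 1] = x     # store only every k-th activation
    return x, checkpoints

def activation_at(i, checkpoints):
    # recompute from the nearest stored checkpoint at or before layer i
    start = max(c for c in checkpoints if c <= i)
    x = checkpoints[start]
    for j in range(start, i):
        x = layers[j](x)
    return x

out, cps = forward_with_checkpoints(0, k=3)
print(out)                    # 0+1+...+7 = 28
print(activation_at(5, cps))  # recomputed on demand, without having stored it
```

Memory drops from one stored activation per layer to one per checkpoint interval, at the cost of redoing at most k−1 layers of forward computation per query; the paper generalizes the choice of what to store from this fixed stride to arbitrary computation graphs under a memory budget.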