Backpropagation


Universal Backpropagation in 15 lines of code - Ray Born - Medium

#artificialintelligence

Backpropagation is the core of today's deep learning applications. If you've dabbled with deep learning, there's a good chance you're aware of the concept. If you've ever implemented backpropagation manually, you're probably very grateful that deep learning libraries do it for you automatically. Implementing backprop by hand is arduous, yet the concept behind general backpropagation is very simple. Did you know that backprop restricted to eager execution is almost trivial to implement?
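
As a rough illustration of why eager-mode backprop can be so compact, here is a minimal scalar reverse-mode autodiff toy in plain Python; the class and the small operator set are my own choices for the sketch, not the article's actual 15-line implementation:

```python
# Minimal scalar reverse-mode autodiff, sketched in plain Python.
# Illustrative toy only, not the article's implementation.

class Var:
    def __init__(self, value, parents=()):
        self.value = value          # forward value
        self.grad = 0.0             # accumulated dL/dself
        self.parents = parents      # list of (parent, local_gradient) pairs

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        # Chain rule: pass the upstream gradient to each parent,
        # scaled by the local derivative recorded in the forward pass.
        self.grad += grad
        for parent, local_grad in self.parents:
            parent.backward(grad * local_grad)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # z = x*y + x
z.backward()           # dz/dx = y + 1 = 4, dz/dy = x = 2
print(x.grad, y.grad)  # -> 4.0 2.0
```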


Backpropagation for people who are afraid of math

#artificialintelligence

Backpropagation is one of the most important concepts in machine learning. There are many online resources that explain the intuition behind this algorithm (IMO the best of these is the backpropagation lecture in the Stanford cs231n video lectures; another very good source is this), but getting from the intuition to practice can be, to put it gently, quite challenging. After spending more hours than I'd like to admit trying to get all the sizes of my layers and weights to fit, constantly forgetting what's what and what's connected where, I sat down and drew some diagrams that illustrate the entire process. Consider it visual pseudocode.
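
In that spirit, here is a small numpy sketch that writes out the array shape at every step of a two-layer network's forward and backward pass; the dimensions and variable names are illustrative choices of mine, not taken from the article's diagrams:

```python
import numpy as np

# Two-layer MLP with the array shapes spelled out at every step.
# Batch size n, input dim d, hidden dim h, output dim k (chosen arbitrarily).
n, d, h, k = 32, 10, 16, 3
X = np.random.randn(n, d)                     # (n, d)
Y = np.random.randn(n, k)                     # (n, k) regression targets
W1, b1 = np.random.randn(d, h), np.zeros(h)   # (d, h), (h,)
W2, b2 = np.random.randn(h, k), np.zeros(k)   # (h, k), (k,)

# Forward pass.
Z1 = X @ W1 + b1                     # (n, h)
A1 = np.maximum(Z1, 0)               # (n, h)  ReLU
Z2 = A1 @ W2 + b2                    # (n, k)
loss = 0.5 * np.sum((Z2 - Y) ** 2) / n

# Backward pass: each gradient has the same shape as the array it differentiates.
dZ2 = (Z2 - Y) / n                   # (n, k)
dW2 = A1.T @ dZ2                     # (h, k) == W2.shape
db2 = dZ2.sum(axis=0)                # (k,)   == b2.shape
dA1 = dZ2 @ W2.T                     # (n, h)
dZ1 = dA1 * (Z1 > 0)                 # (n, h)  gradient through ReLU
dW1 = X.T @ dZ1                      # (d, h) == W1.shape
db1 = dZ1.sum(axis=0)                # (h,)   == b1.shape
```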


A Closer Look at Double Backpropagation

arXiv.org Machine Learning

In recent years, an increasing number of neural network models have included derivatives with respect to inputs in their loss functions, resulting in so-called double backpropagation for first-order optimization. However, so far no general description of the involved derivatives exists. Here, we cover a wide array of special cases in a very general Hilbert space framework, which allows us to provide optimized backpropagation rules for many real-world scenarios. This includes the reduction of calculations for Frobenius-norm-penalties on Jacobians by roughly a third for locally linear activation functions. Furthermore, we provide a description of the discontinuous loss surface of ReLU networks both in the inputs and the parameters and demonstrate why the discontinuities do not pose a big problem in reality.
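
For readers new to the setting, "double backpropagation" refers to differentiating a loss that itself contains a derivative with respect to the inputs. Below is a minimal PyTorch sketch of my own, with a squared input-gradient penalty standing in for the Frobenius-type Jacobian penalties the paper analyzes:

```python
import torch

# Double backpropagation sketch: penalize the gradient of the loss with
# respect to the inputs, then backpropagate through that gradient.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
x = torch.randn(64, 10, requires_grad=True)
y = torch.randn(64, 1)

pred = model(x)
base_loss = torch.nn.functional.mse_loss(pred, y)

# First backward pass: input gradient, kept in the graph (create_graph=True)
# so that it can itself be differentiated.
input_grad, = torch.autograd.grad(base_loss, x, create_graph=True)
penalty = input_grad.pow(2).sum(dim=1).mean()

total_loss = base_loss + 0.1 * penalty
total_loss.backward()   # second backward pass, through the first one
```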


Memorized Sparse Backpropagation

arXiv.org Machine Learning

Neural network learning is typically slow since backpropagation needs to compute full gradients and backpropagate them across multiple layers. Despite the success of existing work in accelerating propagation through sparseness, the relevant theoretical characteristics remain unexplored, and we empirically find that such methods suffer from the loss of information contained in unpropagated gradients. To tackle these problems, in this work we present a unified sparse backpropagation framework and provide a detailed analysis of its theoretical characteristics. The analysis reveals that, when applied to a multilayer perceptron, our framework essentially performs gradient descent using an estimated gradient similar enough to the true gradient, resulting in convergence in probability under certain conditions. Furthermore, a simple yet effective algorithm named memorized sparse backpropagation (MSBP) is proposed to remedy the problem of information loss by storing unpropagated gradients in memory for the next learning step. The experiments demonstrate that the proposed MSBP is able to effectively alleviate the information loss in traditional sparse backpropagation while achieving comparable acceleration.
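
The core idea can be sketched roughly as follows; this is my own simplified rendering of top-k gradient sparsification with a memory of the discarded remainder, not the authors' code:

```python
import numpy as np

# Rough sketch of memorized sparse backpropagation for one layer's
# backpropagated gradient: propagate only the k largest-magnitude entries,
# remember the rest, and add the memory back in at the next step.
memory = None  # stores the unpropagated (discarded) part of the gradient

def sparse_backprop_step(grad, k):
    """Return a sparsified gradient; accumulate the discarded part in memory."""
    global memory
    if memory is None:
        memory = np.zeros_like(grad)
    total = grad + memory                          # add back remembered gradient
    idx = np.argsort(np.abs(total.ravel()))[-k:]   # indices of top-k entries
    sparse = np.zeros_like(total)
    sparse.ravel()[idx] = total.ravel()[idx]       # keep only top-k entries
    memory = total - sparse                        # memorize what was dropped
    return sparse                                  # this is what gets propagated

g = np.random.randn(8)
print(sparse_backprop_step(g, k=2))
```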


A Graph Theoretic Framework of Recomputation Algorithms for Memory-Efficient Backpropagation

arXiv.org Machine Learning

Recomputation algorithms collectively refer to a family of methods that aim to reduce the memory consumption of backpropagation by selectively discarding intermediate results of the forward propagation and recomputing the discarded results as needed. In this paper, we propose a novel and efficient recomputation method that can be applied to a wider range of neural nets than previous methods. We use the language of graph theory to formalize the general recomputation problem of minimizing the computational overhead under a fixed memory budget, and provide a dynamic programming solution to the problem. Our method can reduce the peak memory consumption on various benchmark networks by 36%-81%, which outperforms the reduction achieved by other methods.
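
As a point of reference, the simplest member of this family is fixed-segment checkpointing as exposed by common frameworks. The sketch below uses PyTorch's `checkpoint_sequential` utility and is a minimal illustration of the general trade-off, not the paper's dynamic-programming method:

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# Gradient-checkpointing sketch: activations inside the checkpointed segments
# are discarded in the forward pass and recomputed during backpropagation,
# trading extra compute for lower peak memory.
model = torch.nn.Sequential(*[torch.nn.Linear(256, 256) for _ in range(8)])
x = torch.randn(32, 256, requires_grad=True)

# Split the sequential model into 4 segments; only segment boundaries are stored.
out = checkpoint_sequential(model, 4, x)
loss = out.sum()
loss.backward()   # recomputes the discarded activations segment by segment
```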


Adaptively Truncating Backpropagation Through Time to Control Gradient Bias

arXiv.org Machine Learning

Truncated backpropagation through time (TBPTT) is a popular method for learning in recurrent neural networks (RNNs) that saves computation and memory at the cost of bias by truncating backpropagation after a fixed number of lags. In practice, choosing the optimal truncation length is difficult: TBPTT will not converge if the truncation length is too small, or will converge slowly if it is too large. We propose an adaptive TBPTT scheme that converts the problem from choosing a temporal lag to one of choosing a tolerable amount of gradient bias. For many realistic RNNs, the TBPTT gradients decay geometrically for large lags; under this condition, we can control the bias by varying the truncation length adaptively. For RNNs with smooth activation functions, we prove that this bias controls the convergence rate of SGD with biased gradients for our non-convex loss. Using this theory, we develop a practical method for adaptively estimating the truncation length during training. We evaluate our adaptive TBPTT method on synthetic data and language modeling tasks and find that our adaptive TBPTT ameliorates the computational pitfalls of fixed TBPTT.
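
For context, the quantity being adapted is the truncation length of an ordinary TBPTT loop like the skeleton below (my own sketch, with a fixed `trunc_len` where the paper would re-estimate it from the decay of the backpropagated gradient norms):

```python
import torch

# Truncated BPTT skeleton: backpropagate through at most `trunc_len` time steps
# by detaching the hidden state at segment boundaries. The adaptive scheme
# would grow or shrink trunc_len based on how fast gradients decay with lag.
rnn = torch.nn.RNN(input_size=8, hidden_size=32, batch_first=True)
head = torch.nn.Linear(32, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

seq = torch.randn(4, 200, 8)      # (batch, time, features), synthetic data
target = torch.randn(4, 200, 1)
trunc_len = 20                    # fixed here; adaptively estimated in the paper

h = None
for t0 in range(0, seq.size(1), trunc_len):
    chunk = seq[:, t0:t0 + trunc_len]
    out, h = rnn(chunk, h)
    loss = torch.nn.functional.mse_loss(head(out), target[:, t0:t0 + trunc_len])
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()                # truncation: cut the graph after trunc_len steps
```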


The Backpropagation Algorithm Demystified – Nathalie Jeans – Medium

#artificialintelligence

One of the most important aspects of machine learning is a model's ability to recognize the error in its output and interpret data more precisely as more and more data is fed through its neural network. The process behind this, commonly referred to as backpropagation, isn't as complex as you might think. The first thing people picture when they hear the term "Machine Learning" goes a little something like the Matrix: computers everywhere taking over the world, never mind the human race. In any case, people generally just want nothing to do with it.


Deep Learning – Backpropagation Algorithm Basics – Vinod Sharma's Blog

#artificialintelligence

This algorithm uses supervised learning methods for training Artificial Neural Networks. The whole idea of training multi-layer perceptrons is to compute the derivatives of the error function with respect to the weights using the backpropagation algorithm, and then to update the weights with gradient descent. The algorithm is built on linear algebraic operations, with the goal of minimizing the error function by propagating weight updates backwards through the network. In this post, we will focus on backpropagation and the basic details around it, at a high level and in simple English. As mentioned above, "backpropagation" is an algorithm which uses supervised learning methods to compute the gradient of the error with respect to the weights (the delta rule).
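
To make the delta rule concrete, here is a minimal single-neuron sketch of my own (not code from the post): the gradient of the squared error with respect to each weight is simply the error times the corresponding input.

```python
import numpy as np

# Delta rule for a single linear unit with squared error.
# Prediction: y_hat = w.x + b; error: e = y_hat - y; update: w -= lr * e * x.
rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=3), 0.0, 0.1

for _ in range(100):
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5])   # hidden target function to learn
    e = (x @ w + b) - y                  # error = prediction - target
    w -= lr * e * x                      # gradient of 0.5*e**2 w.r.t. w is e*x
    b -= lr * e                          # gradient of 0.5*e**2 w.r.t. b is e
print(w)   # approaches [1.0, -2.0, 0.5]
```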


8 Tricks for Configuring Backpropagation to Train Better Neural Networks

#artificialintelligence

In this post, you discovered tips and tricks for getting the most out of the backpropagation algorithm when training neural network models. The post also provides more resources on the topic if you are looking to go deeper. Have you tried any of these tricks on your projects? Do you have any questions? Let me know about your results, or ask your questions, in the comments below and I will do my best to answer.


Reinforcement Learning Without Backpropagation or a Clock

arXiv.org Machine Learning

Reinforcement learning (RL) algorithms share qualitative similarities with the algorithms implemented by animal brains. However, there remain clear differences between these two types of algorithms. For example, while RL algorithms using artificial neural networks require information to flow backwards through the network via the backpropagation algorithm, there is currently debate about whether this is feasible in biological neural implementations (Werbos and Davis, 2016). Policy gradient coagent networks (PGCNs) are a class of RL algorithms that were introduced to remove this possibly biologically implausible property of RL algorithms--they use artificial neural networks but do not use the backpropagation algorithm (Thomas, 2011). Since their introduction, PGCN algorithms have proven to be not only a possible improvement in biological plausibility, but a practical tool for improving RL agents.
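
As a very loose illustration of learning without backpropagation, the toy below gives every unit a local REINFORCE-style update driven only by the global reward; it is my own simplified sketch in the spirit of coagent networks, not the PGCN algorithm from the paper:

```python
import numpy as np

# Toy backprop-free network: every unit is a stochastic "coagent" that updates
# its own weights from the shared scalar reward and purely local quantities.
# No gradients cross layers. Simplified illustration, not the PGCN algorithm.
rng = np.random.default_rng(0)
d, h, num_actions, lr = 4, 8, 2, 0.1
W = rng.normal(scale=0.1, size=(h, d))            # hidden coagents (Bernoulli units)
V = rng.normal(scale=0.1, size=(num_actions, h))  # output coagent (softmax policy)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    x = rng.normal(size=d)
    # Hidden layer: each unit fires stochastically given only its own input.
    p = sigmoid(W @ x)
    hid = (rng.random(h) < p).astype(float)
    # Output layer: sample an action from a softmax over the hidden activity.
    logits = V @ hid
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    a = rng.choice(num_actions, p=probs)
    # Global scalar reward from a toy contextual-bandit task.
    reward = 1.0 if a == int(x[0] > 0) else 0.0
    # Local REINFORCE updates: each unit uses its own sample, its own firing
    # probability, and the shared reward -- no backpropagation anywhere.
    W += lr * reward * np.outer(hid - p, x)
    V += lr * reward * np.outer(np.eye(num_actions)[a] - probs, hid)
```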