Yes you should understand backprop – Andrej Karpathy – Medium

#artificialintelligence

When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include the explicit calculations involved in backpropagation at the lowest level. The students had to implement the forward and backward pass of each layer in raw numpy. A seemingly sensible objection is: if you're never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of "it's worth knowing what's under the hood as an intellectual curiosity", or perhaps "you might want to improve on the core algorithm later", but there is a much stronger and practical argument, which I wanted to devote a whole post to: The problem with Backpropagation is that it is a leaky abstraction. In other words, it is easy to fall into the trap of abstracting away the learning process -- believing that you can simply stack arbitrary layers together and backprop will "magically make them work" on your data.
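
To make the "leaky abstraction" point concrete, here is a minimal sketch -- not the actual CS231n assignment code, and with illustrative names and sizes -- of a linear layer followed by a sigmoid, with a hand-written backward pass in raw numpy. The local sigmoid gradient out * (1 - out) goes to zero when the unit saturates, silently killing the gradient for everything upstream:

    import numpy as np

    # Hypothetical layer: linear transform followed by a sigmoid,
    # with the backward pass written out by hand.
    def forward(x, W):
        z = x @ W                       # linear transform
        out = 1.0 / (1.0 + np.exp(-z))  # sigmoid nonlinearity
        return out, (x, W, out)

    def backward(dout, cache):
        x, W, out = cache
        # Local sigmoid gradient: sigma'(z) = out * (1 - out).
        # If the unit saturates (out near 0 or 1), this factor is ~0
        # and no gradient flows to W or to earlier layers.
        dz = dout * out * (1.0 - out)
        dx = dz @ W.T   # gradient w.r.t. the layer input
        dW = x.T @ dz   # gradient w.r.t. the weights
        return dx, dW

    # Saturated example: large pre-activation, so the gradient vanishes.
    x = np.array([[10.0, 10.0]])
    W = np.array([[2.0], [2.0]])
    out, cache = forward(x, W)
    dx, dW = backward(np.ones_like(out), cache)
    print(out)  # ~1.0 (saturated)
    print(dW)   # ~0.0: the weights barely learn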


A history of artificial intelligence in 10 landmarks

#artificialintelligence

Sometimes abbreviated to "backprop," backpropagation is the single most important algorithm in the history of machine learning. The idea behind it was first proposed in 1969, although it only became a mainstream part of machine learning in the mid-1980s. Backpropagation allows a neural network to adjust its hidden layers in the event that the output it comes up with doesn't match the one its creator is hoping for. In short, it means that creators can train their networks to perform better by correcting them when they make mistakes. When this is done, backprop modifies the different connections in the neural network to make sure it gets the answer right the next time it faces the same problem.
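
As a minimal numeric sketch of that correction loop (a toy example with illustrative sizes, data, and learning rate, not any specific historical system), the snippet below trains a one-hidden-layer numpy network on a single sample; the backward pass carries the output error to both weight matrices, which is what lets the hidden layer adjust:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 3))      # input -> hidden weights
    W2 = rng.normal(size=(3, 1))      # hidden -> output weights
    x = np.array([[1.0, -1.0]])
    target = np.array([[0.5]])

    for step in range(200):
        h = np.tanh(x @ W1)           # hidden layer
        y = h @ W2                    # network output
        err = y - target              # mismatch with the desired output
        # Backward pass: propagate the error to *both* layers,
        # so the hidden layer is corrected too.
        dW2 = h.T @ err
        dh = err @ W2.T
        dW1 = x.T @ (dh * (1 - h ** 2))  # tanh'(z) = 1 - tanh(z)^2
        W2 -= 0.1 * dW2
        W1 -= 0.1 * dW1

    print(y.item())  # now close to the 0.5 target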


Backprop-Q: Generalized Backpropagation for Stochastic Computation Graphs

arXiv.org Artificial Intelligence

In real-world scenarios, it is appealing to learn a model that carries out stochastic operations internally, known as a stochastic computation graph (SCG), rather than a deterministic mapping. However, standard backpropagation is not applicable to SCGs. We attempt to address this issue from the angle of cost propagation, with local surrogate costs, called Q-functions, constructed and learned for each stochastic node in an SCG. The SCG can then be trained based on these surrogate costs using standard backpropagation. We propose the entire framework as a solution to generalizing backpropagation for SCGs; it resembles an actor-critic architecture, but one based on a graph. For broad applicability, we study a variety of SCG structures, from one cost to multiple costs. We utilize recent advances in reinforcement learning (RL) and variational Bayes (VB), such as off-policy critic learning and unbiased, low-variance gradient estimation, and review them in the context of SCGs. The generalized backpropagation extends the learning signals transported between stochastic nodes beyond gradients, while preserving the benefit of backpropagating gradients through deterministic nodes. Experimental suggestions and concerns are listed to help design and test any specific model using this framework.
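
For context on why stochastic nodes break ordinary backprop, the sketch below uses a single Bernoulli node with a quadratic downstream cost: the sample itself is not differentiable in the parameter, so the chain rule cannot pass through it, and a score-function (REINFORCE) estimator supplies a surrogate learning signal instead. This is a generic illustration of the setting the paper addresses, not the paper's Q-function construction:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.0  # logit of the Bernoulli stochastic node

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def cost(b):
        # Downstream cost of the sampled value b; since b is a sample,
        # there is no direct chain-rule path from cost back to theta.
        return (b - 1.0) ** 2

    for step in range(2000):
        p = sigmoid(theta)
        b = float(rng.random() < p)   # sampling: not differentiable
        # Score-function estimator: cost(b) * d log p(b|theta) / d theta,
        # which for a Bernoulli with logit theta is cost(b) * (b - p).
        theta -= 0.1 * cost(b) * (b - p)

    print(sigmoid(theta))  # -> near 1, since b = 1 minimizes the cost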


Improving Back-Propagation by Adding an Adversarial Gradient

arXiv.org Machine Learning

The back-propagation algorithm is widely used for learning in artificial neural networks. A challenge in machine learning is to create models that generalize to new data samples not seen in the training data. Recently, a common flaw in several machine learning algorithms was discovered: small perturbations added to the input data lead to consistent misclassification of data samples. Samples that easily mislead the model are called adversarial examples. Training a "maxout" network on adversarial examples has been shown to decrease this vulnerability and also to increase classification performance. This paper shows that adversarial training has a regularizing effect also in networks with logistic, hyperbolic tangent and rectified linear units. A simple extension to the back-propagation method is proposed that adds an adversarial gradient to the training. The extension requires an additional forward and backward pass to calculate a modified input sample, or mini-batch, used as input for standard back-propagation learning. The first experimental results on MNIST show that the "adversarial back-propagation" method increases resistance to adversarial examples and boosts classification performance. The extension reduces the classification error on the permutation-invariant MNIST from 1.60% to 0.95% in a logistic network, and from 1.40% to 0.78% in a network with rectified linear units. Results on CIFAR-10 indicate that the method has a regularizing effect similar to dropout in fully connected networks. Based on these promising results, adversarial back-propagation is proposed as a stand-alone regularizing method that should be further investigated.
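
A sketch of the described two-pass procedure for a single logistic unit is below; the sign-of-gradient perturbation follows the fast gradient sign method, which is an assumption about the exact perturbation used, and epsilon, sizes, and learning rate are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(5,))
    x = rng.normal(size=(5,))
    t = 1.0     # target label
    eps = 0.1   # perturbation magnitude

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(100):
        # Pass 1: extra forward/backward pass w.r.t. the *input*
        # to build the modified (adversarial) sample.
        y = sigmoid(W @ x)
        dloss_dx = (y - t) * W        # cross-entropy gradient w.r.t. x
        x_adv = x + eps * np.sign(dloss_dx)
        # Pass 2: standard back-propagation on the modified sample.
        y_adv = sigmoid(W @ x_adv)
        W -= 0.5 * (y_adv - t) * x_adv

    # The unit now classifies both the clean and the perturbed input.
    print(sigmoid(W @ x), sigmoid(W @ x_adv))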