
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features which are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise, plausible 'targets' for every unit. These targets can then be used to produce weight updates for network training. However, thus far, target propagation has been proposed heuristically, without demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) where the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical updates when synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer.
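The abstract mentions a "soft orthogonality-inducing regularizer" used to keep weight matrices near the orthogonal regime where GAIT-prop and backpropagation coincide. A common way to realize such a penalty is a Frobenius-norm term of the form ||W Wᵀ − I||²; the sketch below illustrates that form. The function name and the `strength` coefficient are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def soft_orthogonality_penalty(W, strength=1e-3):
    """Frobenius-norm penalty strength * ||W W^T - I||_F^2.

    Zero when W is orthogonal; adding this term to a training loss
    softly pushes weight matrices toward orthogonality.
    (Illustrative sketch, not the paper's exact regularizer.)
    """
    I = np.eye(W.shape[0])
    diff = W @ W.T - I
    return strength * np.sum(diff ** 2)

# A generic random matrix incurs a nonzero penalty; an orthogonal
# matrix (here obtained via QR decomposition) incurs essentially none.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(W)  # Q is orthogonal: Q @ Q.T == I
penalty_random = soft_orthogonality_penalty(W)
penalty_orthogonal = soft_orthogonality_penalty(Q)
```

In practice such a term is simply added to the task loss, so gradient descent trades off task performance against closeness to orthogonality.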






Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

I believe this paper makes a meaningful contribution to this line of work and have changed my score accordingly to support acceptance. I do have a few comments that I hope you will consider as you prepare a final version of this paper, mainly coming from a neuroscience perspective. While the method described in this paper advances the family of target prop-related models and may serve as a foundation for future work in bio-plausible learning models, I don't think it is appropriate to describe it as more biologically plausible than backpropagation. One of the commonly cited biologically implausible features of backpropagation (weight symmetry) is replaced here by an equally implausible mechanism (perfect inverse models). It is true that bio-plausible ways of approximating inverses may exist, but there are also proposals for bio-plausible ways of maintaining weight symmetry (e.g.


Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

This paper presents a biologically plausible learning rule as an alternative to standard back-propagation. This is a heavily studied area in ML, with strong interest from both the ML and computational neuroscience communities. The reviewers agreed that this work presents an exciting and important contribution over the existing literature on this problem. There was extensive discussion between reviewers, with two reviewers championing the paper for acceptance. The lower scoring reviewers cited the empirical evaluation as a weakness of the paper, while others argued that the idea on its own was sufficiently interesting to the community.


Scaling up learning with GAIT-prop

Dalm, Sander, Ahmad, Nasir, Ambrogioni, Luca, van Gerven, Marcel

arXiv.org Artificial Intelligence

Backpropagation of error (BP) is a widely used and highly successful learning algorithm. However, its reliance on non-local information in propagating error gradients makes it seem an unlikely candidate for learning in the brain. In the last decade, a number of investigations have focused on determining whether alternative, more biologically plausible computations can be used to approximate BP. This work builds on one such local learning algorithm, Gradient Adjusted Incremental Target Propagation (GAIT-prop), which has recently been shown to approximate BP in a manner which appears biologically plausible. This method constructs local, layer-wise weight update targets in order to enable plausible credit assignment. However, in deep networks, the local weight updates computed by GAIT-prop can deviate from BP for a number of reasons. Here, we provide and test methods to overcome such sources of error. In particular, we adaptively rescale the locally-computed errors and show that this significantly increases the performance and stability of the GAIT-prop algorithm when applied to the CIFAR-10 dataset.
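The key ingredients described above are a layer-local error (the gap between a unit's target and its forward-pass activity) and a rescaling of that error before the weight update. The sketch below shows one simple way such a scheme could look: the local error is normalized to a fixed magnitude before forming an outer-product update. The function name, the fixed-norm rescaling rule, and `target_norm` are assumptions for illustration; the paper's adaptive scheme may differ in detail.

```python
import numpy as np

def rescaled_local_update(h_prev, h, t, lr=0.1, target_norm=1.0):
    """Layer-local weight update from a target, with error rescaling.

    h_prev : activity of the previous layer (input to this layer's weights)
    h      : this layer's forward-pass activity
    t      : this layer's target (as produced by target propagation)

    The local error t - h is rescaled to a fixed norm so that update
    magnitudes stay comparable across layers of a deep network.
    (Illustrative sketch of the idea, not the paper's exact rule.)
    """
    e = t - h                            # locally available error signal
    norm = np.linalg.norm(e)
    if norm > 0:
        e = e * (target_norm / norm)     # adaptive rescaling step
    return lr * np.outer(e, h_prev)      # Hebbian-style outer-product update

# Example: a tiny layer with 3 inputs and 2 units.
h_prev = np.ones(3)
h = np.zeros(2)
t = np.array([3.0, 4.0])
dW = rescaled_local_update(h_prev, h, t, lr=1.0, target_norm=1.0)
```

Note that everything the update uses (previous-layer activity, this layer's activity, and its target) is local to the layer, which is what makes this family of rules a plausibility-motivated alternative to BP's non-local gradient signals.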
