Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain
Millidge, Beren, Tschantz, Alexander, Seth, Anil K, Buckley, Christopher L
arXiv.org Artificial Intelligence
Can the powerful backpropagation of error (backprop) algorithm be formulated in a manner suitable for implementation in neural circuitry? The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global (error) signals, as in orthodox backprop. Recently, several algorithms for approximating backprop using only local signals, such as predictive coding and equilibrium-prop, have been proposed. However, these algorithms typically impose other requirements which challenge biological plausibility: for example, requiring complex and precise connectivity schemes (predictive coding), or multiple sequential backwards phases with information being stored across phases (equilibrium-prop). Here, we propose a novel local algorithm, Activation Relaxation (AR), which is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. Our algorithm converges robustly and exactly to the correct backpropagation gradients, requires only a single type of neuron, utilises only a single backwards phase, and can perform credit assignment on arbitrary computation graphs. We illustrate these properties by training deep neural networks on visual classification tasks, and we describe simplifications to the algorithm which remove further obstacles to neurobiological implementation (for example, the weight-transport problem, and the use of nonlinear derivatives), while preserving performance.
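The core idea, recovering the backprop gradient as the fixed point of a leaky relaxation dynamics on the activations, is compact enough to sketch. The following is a minimal NumPy illustration, not the authors' implementation: it assumes the dynamics dx_l/dt = -x_l + W_l^T (f'(z_l) ⊙ x_{l+1}) for a feedforward MLP, with the output-layer units clamped to the loss gradient dL/da_L; the layer sizes, tanh nonlinearity, step size, and iteration count are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]  # input, two hidden layers, output (illustrative)
Ws = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

f = np.tanh
f_prime = lambda z: 1.0 - np.tanh(z) ** 2

# Forward pass: store pre-activations z_l and activations a_l.
x_in = rng.normal(size=sizes[0])
activations, pre_acts = [x_in], []
for W in Ws:
    z = W @ activations[-1]
    pre_acts.append(z)
    activations.append(f(z))

# Squared-error loss against a random target: dL/da_L = a_L - target.
target = rng.normal(size=sizes[-1])
out_grad = activations[-1] - target

# Relaxation phase: auxiliary units x_l evolve under leaky dynamics
# whose equilibrium is the backprop gradient dL/da_l at each layer.
xs = [np.zeros(s) for s in sizes]
xs[-1] = out_grad                 # output layer clamped to the loss gradient
dt = 0.1
for _ in range(200):
    for l in range(len(sizes) - 2, -1, -1):
        drive = Ws[l].T @ (f_prime(pre_acts[l]) * xs[l + 1])
        xs[l] += dt * (-xs[l] + drive)

# Sanity check: compare the relaxed activations against explicit backprop.
g = out_grad
for l in range(len(Ws) - 1, -1, -1):
    g = Ws[l].T @ (f_prime(pre_acts[l]) * g)
print(np.allclose(xs[0], g, atol=1e-5))  # True once the dynamics have settled
```

Since each layer's drive depends only on the layer above, the fixed point is reached layer by layer, and only local (adjacent-layer) signals enter each update; the paper's further simplifications (e.g. dropping the weight transpose and the nonlinear derivative) modify only the drive term.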
Sep-15-2020