Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm
Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L. Buckley
arXiv.org Artificial Intelligence
The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules. We have previously shown that the algorithm can be further simplified and made more biologically plausible by (i) introducing a learnable set of backwards weights, which overcomes the weight-transport problem, and (ii) avoiding the computation of nonlinear derivatives at each neuron. However, the efficacy of these simplifications has, so far, only been tested on simple multi-layer-perceptron (MLP) networks. Here, we show that these simplifications maintain performance on more complex CNN architectures and challenging datasets, which other biologically plausible schemes have struggled to scale to. We also investigate whether another biologically implausible assumption of the original AR algorithm, the frozen feedforward pass, can be relaxed without damaging performance.

The backpropagation of error algorithm (backprop) has been the engine driving the successes of modern machine learning with deep neural networks.
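To make the mechanics concrete, the following is a minimal NumPy sketch of the relaxation scheme the abstract describes, under assumptions not stated in this listing: a tiny tanh MLP with made-up layer sizes, a mean-squared-error loss, and specific step sizes and iteration counts chosen for illustration. It shows the frozen feedforward pass, a relaxation phase in which error units settle toward backprop-like gradients through learnable backward weights `B` (simplification (i)), and purely local weight updates; the exact dynamics and update rules of the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.tanh(x)

def fprime(x):
    return 1.0 - np.tanh(x) ** 2

# Hypothetical layer sizes for illustration only.
sizes = [4, 8, 8, 2]
W = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.1
     for i in range(len(sizes) - 1)]
# Simplification (i): learnable backward weights B replace the transpose W^T
# (avoiding weight transport). Here B is randomly initialised; in the paper
# the backward weights are themselves learned.
B = [rng.standard_normal(Wi.shape) * 0.1 for Wi in W]

x = rng.standard_normal(sizes[0])
target = np.array([1.0, 0.0])

# Frozen feedforward pass: activations are computed once and held fixed
# throughout the relaxation phase (the assumption the paper probes relaxing).
pre, a = [], [x]
for Wi in W:
    pre.append(Wi @ a[-1])
    a.append(f(pre[-1]))

# Relaxation phase: error units e_l follow leaky dynamics driven by the
# layer above, settling toward backprop-like error signals.
e = [np.zeros(s) for s in sizes]
e[-1] = fprime(pre[-1]) * (a[-1] - target)  # output error under MSE loss
eta = 0.1
for _ in range(200):
    for l in range(len(sizes) - 2, 0, -1):
        # Simplification (ii) would drop the fprime(...) factor entirely;
        # it is kept here to show the term being removed.
        drive = fprime(pre[l - 1]) * (B[l].T @ e[l + 1])
        e[l] = e[l] + eta * (-e[l] + drive)

# Local weight updates: each update uses only the pre- and post-synaptic
# quantities available at that connection after relaxation.
lr = 0.01
for l in range(len(W)):
    W[l] -= lr * np.outer(e[l + 1], a[l])
```

If `B` were tied to `W`, the fixed point of these dynamics would coincide with the standard backprop error signal; with learned `B`, the relaxed errors only approximate it, which is the trade-off the simplifications test.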
Oct-13-2020