The Predictive Forward-Forward Algorithm

Alexander Ororbia and Ankur Mali

arXiv.org Artificial Intelligence 

The algorithm known as backpropagation of errors [59, 32], or "backprop" for short, has long faced criticism concerning its neurobiological plausibility [10, 14, 56, 35, 15]. Despite powering the tremendous progress and success behind deep learning and its ever-growing myriad of promising applications [57, 12], it is improbable that backprop is a viable model of learning in the brain, such as in cortical regions. Notably, there are both practical and biophysical issues [15, 35]; among them, there is a lack of evidence that: 1) neural activities are explicitly stored to be used later for synaptic adjustment, 2) error derivatives are backpropagated along a global feedback pathway to generate teaching signals, 3) error signals move backward along the same neural pathways used to forward propagate information, and 4) inference and learning are locked into a largely sequential (rather than massively parallel) schedule. Furthermore, when processing temporal data, it is certainly not the case that the neural circuitry of the brain is unfolded backward through time to adjust synapses [42] (as in backprop through time). Recently, there has been growing interest in the research domain of brain-inspired computing, which focuses on developing algorithms and computational models that attempt to circumvent or resolve critical issues such as those highlighted above. Among the most powerful and promising of these is predictive coding (PC) [18, 48, 13, 4, 51, 41], and among the most recent is the forward-forward (FF) algorithm [19]. These alternatives offer different means of conducting credit assignment with performance similar to backprop while remaining far more consistent with what is known about learning in real biological neurons (see Figure 1 for a graphical depiction and comparison of the respective credit assignment setups). This paper proposes a novel model and learning process, the predictive forward-forward (PFF) procedure, which generalizes and combines FF and PC into a robust stochastic neural system that simultaneously learns a representation and a generative model in a biologically plausible fashion. Like the FF algorithm, the PFF procedure offers a promising, potentially useful model of biological neural circuits, a candidate system for low-power analog hardware and neuromorphic circuits, and a backprop alternative worthy of future investigation and study.
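To make the layer-local character of these backprop alternatives concrete, the sketch below illustrates the "goodness"-based update at the heart of the FF algorithm [19], in which each layer is trained with purely local information to produce high goodness (sum of squared activities) for positive, real data and low goodness for negative, contrastive data. This is an illustrative toy in plain NumPy under stated assumptions, not the authors' PFF implementation; the layer sizes, threshold, learning rate, and negative-data construction are arbitrary choices made here for demonstration.

```python
# Minimal sketch of a forward-forward (FF) style layer-local update.
# Assumptions: ReLU units, goodness = sum of squared activities, and a
# logistic loss that pushes goodness above a threshold for positive data
# and below it for negative data. No global backward pass is used.
import numpy as np

rng = np.random.default_rng(0)

def layer_goodness(h):
    # FF "goodness" of a layer: sum of squared activities per sample.
    return np.sum(h ** 2, axis=1)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr, self.threshold = lr, threshold

    def forward(self, x):
        # Length-normalize the input so only its direction (not its own
        # goodness) is passed on, as in the FF algorithm.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, x @ self.W + self.b), x

    def local_update(self, x, positive):
        # Train the layer to raise goodness above the threshold for
        # positive data and lower it below the threshold for negative
        # data, using only quantities available at this layer.
        h, x_norm = self.forward(x)
        g = layer_goodness(h)
        sign = 1.0 if positive else -1.0
        # Probability of labeling the sample "positive" from its goodness.
        p = 1.0 / (1.0 + np.exp(-sign * (g - self.threshold)))
        dgood = -sign * (1.0 - p)         # d(-log p)/d(goodness), per sample
        dh = 2.0 * h * dgood[:, None]     # d(goodness)/dh = 2h
        dh *= (h > 0)                     # ReLU gate
        self.W -= self.lr * x_norm.T @ dh / len(x)
        self.b -= self.lr * dh.mean(axis=0)
        return h

# Toy usage: real samples as positive data, shuffled pixels as negative data.
layer = FFLayer(n_in=16, n_out=32)
x_pos = rng.normal(1.0, 0.5, size=(8, 16))
x_neg = rng.permutation(x_pos.ravel()).reshape(8, 16)
for _ in range(100):
    layer.local_update(x_pos, positive=True)
    layer.local_update(x_neg, positive=False)
```

Each layer in such a stack repeats this purely local positive/negative contrast, which is the credit-assignment style that the PFF procedure combines with predictive coding's generative, error-driven dynamics.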
