The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware
Renner, Alpha, Sheldon, Forrest, Zlotnik, Anatoly, Tao, Louis, Sornborger, Andrew
Spike-based learning in plastic neuronal networks is playing increasingly key roles in both theoretical neuroscience and neuromorphic computing. The brain learns in part by modifying the synaptic strengths between neurons and neuronal populations. While specific synaptic plasticity or neuromodulatory mechanisms may vary in different brain regions, it is becoming clear that a significant level of dynamical coordination between disparate neuronal populations must exist, even within an individual neural circuit [1]. Classically, backpropagation (BP, and other learning algorithms) has been essential for supervised learning in artificial neural networks (ANNs). Although the question of whether or not BP operates in the brain is still an outstanding issue [2], BP does solve the problem of how a global objective function can be related to local synaptic modification in a network.

There is particular interest in deep learning, which is a central tool in modern machine learning. Deep learning relies on a layered, feedforward network similar to the early layers of the visual cortex, with threshold nonlinearities at each layer that resemble mean-field approximations of neuronal integrate-and-fire models. While feedforward networks are readily translated to neuromorphic hardware [6-8], the far more computationally intensive training of these networks 'on chip' has proven elusive, as the structure of backpropagation makes the algorithm notoriously difficult to implement in a neural circuit [9, 10]. A feasible neural implementation of the backpropagation algorithm has gained renewed scrutiny with the rise of new neuromorphic computational architectures that feature local synaptic plasticity [5, 11-13]. Because of the well-known difficulties, neuromorphic
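The claim that BP relates a global objective to local synaptic modification can be stated concretely in standard ANN notation; the following is the textbook formulation for a generic feedforward network, not the spiking realization developed in this paper. For layer weights $W^{(\ell)}$, pre-activations $z^{(\ell)} = W^{(\ell)} a^{(\ell-1)}$, activations $a^{(\ell)} = f(z^{(\ell)})$, and a scalar loss $\mathcal{L}$, BP computes per-neuron error signals by the chain rule,
\[
\delta^{(L)} = \nabla_{a^{(L)}} \mathcal{L} \odot f'\!\left(z^{(L)}\right), \qquad
\delta^{(\ell)} = \left(W^{(\ell+1)}\right)^{\top} \delta^{(\ell+1)} \odot f'\!\left(z^{(\ell)}\right),
\]
so that each weight update is a product of quantities available at the synapse's two endpoints,
\[
\Delta W^{(\ell)} = -\eta\, \delta^{(\ell)} \left(a^{(\ell-1)}\right)^{\top}.
\]
Each update pairs presynaptic activity $a^{(\ell-1)}$ with a postsynaptic error $\delta^{(\ell)}$, which is the locality property referred to above; the difficulty for a physical neural circuit lies in computing $\delta^{(\ell)}$, since this requires transporting errors backward through the transpose of the forward weights.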
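As an illustration of the layered feedforward structure with threshold nonlinearities described above, the sketch below uses the standard mean-field firing-rate curve of a leaky integrate-and-fire neuron as the per-layer transfer function. All names, shapes, and parameter values here are illustrative assumptions, not taken from the hardware implementation reported in the paper.

```python
import numpy as np

def lif_rate(z, v_th=1.0, tau=0.02, t_ref=0.002):
    """Mean-field firing rate of a leaky integrate-and-fire neuron driven by a
    constant input z (membrane resistance folded into z): silent below threshold,
    saturating toward 1/t_ref far above it. Parameter values are illustrative."""
    rate = np.zeros_like(z)
    supra = z > v_th
    rate[supra] = 1.0 / (t_ref + tau * np.log(z[supra] / (z[supra] - v_th)))
    return rate

def forward(x, weights):
    """Layered feedforward pass: each layer applies a linear map followed by the
    threshold nonlinearity, a rate-based stand-in for a spiking population."""
    a = x
    for W in weights:
        a = lif_rate(W @ a)
    return a

# Tiny usage example with random weights (layer sizes chosen arbitrarily).
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(64, 128)),
           rng.normal(scale=0.5, size=(10, 64))]
output_rates = forward(rng.random(128), weights)
```

In this rate-based picture the forward pass maps directly onto feedforward neuromorphic hardware; it is the backward pass of the update equations above that has no equally direct circuit analogue.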