Training Multilayer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation Machine Learning

Spiking neural networks (SNNs) have garnered considerable interest for supervised and unsupervised learning applications. This paper deals with the problem of training multilayer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to output a desired spike train in response to a given input. To tackle this, the problem of training a multilayer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than on spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, the rule is reformulated for a more efficient implementation, which can also be interpreted as spatio-temporal error backpropagation. The learning rule is validated by employing it to solve a generic spike-based training problem as well as a spike-based formulation of the XOR problem. The new algorithm is thus a key step towards building deep spiking neural networks capable of event-triggered learning.
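
As a rough illustration of the membrane-potential-based objective, the sketch below applies a NormAD-style update to a single leaky integrate-and-fire neuron: each input spike train is filtered by the membrane kernel, normalized across inputs, and accumulated wherever the observed spike train deviates from the desired one. The constants, the simplified leaky kernel, and the single-neuron setting are assumptions for illustration, not the paper's exact multilayer formulation.

```python
import numpy as np

# Minimal sketch of a NormAD-style update for one LIF neuron (illustrative;
# constants and the simplified kernel are assumptions, not the paper's
# exact formulation).

dt = 0.1e-3          # simulation time step (s)
T = 0.1              # trial duration (s)
tau_m = 10e-3        # membrane time constant (s)
n_steps = int(T / dt)
t = np.arange(n_steps) * dt

rng = np.random.default_rng(0)
n_in = 20
w = rng.normal(0.0, 0.1, n_in)                  # synaptic weights
inputs = rng.random((n_in, n_steps)) < 0.02     # Poisson-like input spikes

# Membrane impulse response of the LIF neuron (leaky kernel).
kernel = np.exp(-t / tau_m)

# d_i(t): each input spike train filtered by the membrane kernel.
d = np.array([np.convolve(s, kernel)[:n_steps] for s in inputs]) * dt

def normad_update(desired, observed, d, lr=0.1):
    """NormAD-style update: accumulate the normalized filtered input
    at the instants where desired and observed spike trains disagree."""
    e = desired.astype(float) - observed.astype(float)  # spike-train error
    norm = np.linalg.norm(d, axis=0) + 1e-12            # ||d(t)|| over inputs
    d_hat = d / norm                                    # normalized direction
    return lr * (d_hat * e).sum(axis=1)                 # integrate over time

desired = np.zeros(n_steps, dtype=bool)
desired[::200] = True                  # target spikes every 20 ms
observed = np.zeros(n_steps, dtype=bool)  # e.g. a currently silent neuron
w += normad_update(desired, observed, d)
```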

Deep Learning in Spiking Neural Networks Artificial Intelligence

Deep learning approaches have recently shown remarkable performance in many areas of pattern recognition. In spite of their power in hierarchical feature extraction and classification, this type of neural network is computationally expensive and difficult to implement on hardware for portable devices. In another vein of research on neural network architectures, spiking neural networks (SNNs) have been described as power-efficient models because of their sparse, spike-based communication framework. SNNs are brain-inspired in that they seek to mimic the accurate and efficient functionality of the brain. Recent studies try to take advantage of both frameworks (deep learning and SNNs) to develop deep SNN architectures that achieve the high performance of recently proven deep networks while offering bio-inspired, power-efficient platforms. Additionally, as the brain processes different stimulus patterns through multilayer SNNs communicating by spike trains via adaptive synapses, developing artificial deep SNNs can also be very helpful for understanding the computations performed by biological neural circuits. Having both computational and experimental backgrounds, we provide a comprehensive summary of recent advances in developing deep SNNs that may assist computer scientists interested in developing more advanced and efficient networks and help experimentalists frame new hypotheses for neural information processing in the brain using a more realistic model.
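
To make the "sparse, spike-based communication" concrete, here is a minimal, illustrative rate-coding sketch (not taken from the survey): an analog input such as a normalized pixel intensity is converted into a Poisson spike train of the kind exchanged between deep SNN layers. All names and parameters are placeholder choices.

```python
import numpy as np

# Illustrative sketch: rate-code an analog value in [0, 1] (e.g. a
# normalized pixel intensity) as a Poisson spike train -- the sparse,
# event-based signal SNN layers exchange.

rng = np.random.default_rng(42)

def poisson_encode(intensity, duration_ms=100, max_rate_hz=200, dt_ms=1.0):
    """Return a boolean spike train whose mean firing rate is
    proportional to `intensity`."""
    n_steps = int(duration_ms / dt_ms)
    p_spike = intensity * max_rate_hz * dt_ms / 1000.0  # spike prob per step
    return rng.random(n_steps) < p_spike

train = poisson_encode(0.8)
print(f"{train.sum()} spikes in 100 ms")  # ~16 expected at 160 Hz
```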

Gradient Descent for Spiking Neural Networks Machine Learning

Many studies of neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking networks and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (~millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory XOR task over an extended duration (~second). The results show that our method optimizes the spiking network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our result offers a general-purpose supervised learning algorithm for spiking neural networks, thus advancing further investigations of spike-based computation.
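
The exact gradient derivation belongs to the paper; the sketch below only conveys the core idea under simplified assumptions: replace the hard spiking threshold with a smooth gate so the loss can be backpropagated through every time step of the membrane dynamics. The leak, threshold, gate smoothness, and squared-error loss are all placeholder choices, not the paper's model.

```python
import numpy as np

# Sketch: differentiable spiking dynamics. A smooth gate stands in for the
# hard threshold, making backprop through time (BPTT) well defined.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

alpha, v_th, beta = 0.9, 1.0, 0.2   # leak, threshold, gate smoothness
T = 100
rng = np.random.default_rng(1)
x = rng.random(T)                                # input signal
target = (np.arange(T) % 10 == 0).astype(float)  # desired spike pattern
w = 0.5

# Forward pass: leaky integration with a soft spike gate and
# subtractive reset.
v = np.zeros(T + 1)
s = np.zeros(T)
for t in range(T):
    s[t] = sigmoid((v[t] - v_th) / beta)         # soft spike in (0, 1)
    v[t + 1] = alpha * v[t] + w * x[t] - v_th * s[t]

# Backward pass (BPTT) for L = sum_t (s_t - target_t)^2.
dL_dw = 0.0
dL_dv = 0.0                                      # dL/dv[t+1], runs backwards
for t in reversed(range(T)):
    ds_dv = s[t] * (1 - s[t]) / beta             # gate derivative
    # v[t] affects L through s[t] directly and through v[t+1].
    dL_ds = 2.0 * (s[t] - target[t]) - dL_dv * v_th
    dL_dvt = dL_ds * ds_dv + dL_dv * alpha
    dL_dw += dL_dv * x[t]                        # w enters via v[t+1]
    dL_dv = dL_dvt

w -= 0.01 * dL_dw                                # one gradient-descent step
```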

A Supervised STDP-based Training Algorithm for Living Neural Networks Machine Learning

Neural networks have shown great potential in many applications such as speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron-engineering constraints into account. A 74.7% accuracy is achieved on the MNIST benchmark for handwritten digit recognition.
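
For orientation, the following is a generic pair-based STDP update gated by a teacher signal, one common way to make STDP supervised. It is a hypothetical sketch with placeholder constants and does not reproduce the paper's specific rule or its neuron-engineering constraints.

```python
import numpy as np

# Illustrative supervised STDP: a standard pair-based window, with the sign
# of the update gated by whether the postsynaptic neuron codes the label.

a_plus, a_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # STDP time constants (ms)

def stdp_dw(t_pre, t_post, supervise):
    """Weight change for one pre/post spike pair. `supervise` is +1 if
    the postsynaptic neuron codes the true label (reinforce its causal
    inputs) and -1 otherwise (punish them)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: causal pairing, potentiate
        dw = a_plus * np.exp(-dt / tau_plus)
    else:         # post before pre: acausal pairing, depress
        dw = -a_minus * np.exp(dt / tau_minus)
    return supervise * dw

# Example: an input spike at 5 ms, the labelled output neuron fires at 12 ms.
print(stdp_dw(5.0, 12.0, supervise=+1))  # small positive update
print(stdp_dw(5.0, 12.0, supervise=-1))  # same pair, wrong neuron: negative
```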

Enforcing balance allows local supervised learning in spiking recurrent networks Neural Information Processing Systems

To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation, are not local and thus not biologically plausible. Furthermore, reproducing the Poisson-like statistics of neural responses requires networks with balanced excitation and inhibition, and such balance is easily destroyed during learning. Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input. The network uses two types of recurrent connections: fast and slow. The fast connections learn to balance excitation and inhibition using a voltage-based plasticity rule. The slow connections are trained to minimize the error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by the fast connections is crucial to ensure that global error signals are available locally in each neuron, in turn resulting in a local learning rule for the slow connections. This demonstrates that spiking networks can learn complex dynamics using purely local learning rules, with E/I balance acting as the key ingredient rather than an additional constraint. The resulting network implements a given function within the predictive coding scheme, with minimal dimensions and activity.
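
As a schematic of the two rules described above, the sketch below assumes placeholder shapes, signals, and learning rates: fast connections are updated with a voltage-based rule whenever a neuron spikes, and slow connections with a current-based Hebbian rule correlating the error feedback with filtered presynaptic activity. It illustrates the structure of the idea, not the paper's derivation.

```python
import numpy as np

# Schematic sketch of the fast (voltage-based) and slow (current-based
# Hebbian) plasticity rules; all quantities are placeholders.

rng = np.random.default_rng(7)
N = 50
V = rng.normal(0, 0.5, N)         # membrane voltages
r = rng.random(N)                 # filtered spike trains (rate proxies)
ff_error = rng.normal(0, 0.1, N)  # error fed back as feed-forward input
W_fast = rng.normal(0, 0.1, (N, N))
W_slow = rng.normal(0, 0.1, (N, N))
mu, eta_f, eta_s = 0.1, 0.01, 0.001

def fast_update(W_fast, V, r, k):
    """Voltage-based rule applied when neuron k spikes: each neuron i
    adjusts its connection from k to push its voltage toward E/I balance."""
    W_fast[:, k] -= eta_f * (V + mu * r)
    return W_fast

def slow_update(W_slow, ff_error, r):
    """Current-based Hebbian rule: correlate the error feedback current
    arriving at each neuron with presynaptic filtered activity."""
    W_slow += eta_s * np.outer(ff_error, r)
    return W_slow

k = 3  # suppose neuron 3 just spiked
W_fast = fast_update(W_fast, V, r, k)
W_slow = slow_update(W_slow, ff_error, r)
```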