Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware

arXiv.org Machine Learning

Using Intel's Loihi neuromorphic research chip and ABR's Nengo Deep Learning toolkit, we analyze the inference speed, dynamic power consumption, and energy cost per inference of a two-layer neural network keyword spotter trained to recognize a single phrase. We perform comparative analyses of this keyword spotter running on more conventional hardware devices including a CPU, a GPU, Nvidia's Jetson TX1, and the Movidius Neural Compute Stick. Our results indicate that for this inference application, Loihi outperforms all of these alternatives on an energy cost per inference basis while maintaining near-equivalent inference accuracy. Furthermore, an analysis of tradeoffs between network size, inference speed, and energy cost indicates that Loihi's comparative advantage over other low-power computing devices improves for larger networks.
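The headline metric here, dynamic energy cost per inference, reduces to a simple computation once idle and loaded power draws have been measured. The sketch below is one plausible way to compute it; the power and timing numbers are hypothetical, and on real hardware they would come from a power monitor and wall-clock timing of a fixed batch of inferences.

```python
def dynamic_energy_per_inference(total_power_w, idle_power_w,
                                 runtime_s, n_inferences):
    """Joules of dynamic energy attributable to each inference.

    Subtracting idle from total draw isolates the power actually spent
    on computation, excluding the baseline cost of powering the device.
    """
    dynamic_power_w = total_power_w - idle_power_w
    return dynamic_power_w * runtime_s / n_inferences

# Hypothetical device: 3.2 W under load, 2.0 W idle,
# 10,000 inferences completed in 25 s -> 3 mJ per inference.
print(dynamic_energy_per_inference(3.2, 2.0, 25.0, 10_000))  # 0.003
```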


NengoDL: Combining deep learning and neuromorphic modelling methods

arXiv.org Artificial Intelligence

NengoDL is a software framework designed to combine the strengths of neuromorphic modelling and deep learning. NengoDL allows users to construct biologically detailed neural models, intermix those models with deep learning elements (such as convolutional networks), and then efficiently simulate those models in an easy-to-use, unified framework. In addition, NengoDL allows users to apply deep learning training methods to optimize the parameters of biological neural models. In this paper we present basic usage examples, benchmarking, and details on the key implementation elements of NengoDL. More details can be found at https://www.nengo.ai/nengo-dl.
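A minimal sketch of the unified workflow described here, assuming a standard Nengo/NengoDL installation: the model is built with Nengo's ordinary frontend objects and then simulated through NengoDL's TensorFlow-backed Simulator (recent NengoDL versions also expose Keras-style compile/fit methods on the same Simulator for parameter optimization).

```python
import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    stim = nengo.Node(np.sin)  # 1-D input signal, a function of time
    # an ensemble of leaky integrate-and-fire neurons representing the signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1,
                         neuron_type=nengo.LIF())
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)  # low-pass filtered decoded output

# the same model, simulated on TensorFlow via NengoDL
with nengo_dl.Simulator(net) as sim:
    sim.run(1.0)
    print(sim.data[probe].shape)  # (1000, 1): one decoded value per timestep
```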


Neuromorphic Deep Learning Machines

arXiv.org Artificial Intelligence

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent backpropagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated weights are not essential for learning deep representations. Random BP replaces feedback weights with random ones and encourages the network to adjust its feed-forward weights to learn pseudo-inverses of the (random) feedback weights. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity to learn deep representations in neuromorphic computing hardware. The rule requires only one addition and two comparisons for each synaptic weight, using a two-compartment leaky Integrate & Fire (I&F) neuron, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that with eRBP, deep representations are rapidly learned, achieving classification accuracies nearly identical to those of artificial neural network simulations on GPUs, while remaining robust to quantization of neural and synaptic state during learning.
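A rough per-synapse sketch of the update implied by that cost claim, with illustrative names and constants rather than the paper's implementation: on a presynaptic spike, the error signal accumulated in the neuron's second (dendritic) compartment is added to the weight, gated by a boxcar function of the somatic membrane potential.

```python
def erbp_update(w, pre_spiked, error_dendrite, v_soma,
                v_min=-0.5, v_max=0.5):
    """Event-driven random-BP update for a single synaptic weight.

    Two comparisons implement the boxcar gate on the somatic potential;
    one addition applies the error-modulated change to the weight.
    All names and bounds here are illustrative assumptions.
    """
    if pre_spiked and v_min < v_soma < v_max:  # the two comparisons
        w = w + error_dendrite                 # the one addition
    return w
```

In the full rule the dendritic error is itself driven by spikes arriving through fixed random feedback weights, which is what makes the scheme random BP rather than exact backpropagation.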


Composing Neural Algorithms with Fugu

arXiv.org Artificial Intelligence

Neuromorphic hardware architectures represent a growing family of potential post-Moore's-Law-era platforms. Largely due to event-driven processing inspired by the human brain, these platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, considerable difficulty remains in successfully programming, configuring, and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than requiring a developer to attain intricate knowledge of how to program and exploit spiking neural dynamics in order to realize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction: a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for the free exchange and easy implementation of the ever-growing list of spiking neural algorithms.
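As a hedged illustration of that compositional idea (not Fugu's actual API: every class and method name below is hypothetical), each "brick" encapsulates one spiking kernel and a scaffold records how bricks feed one another, so the composite graph can later be lowered to any supported backend.

```python
class Brick:
    """A self-contained spiking kernel with a name (hypothetical API)."""
    def __init__(self, name):
        self.name = name

class Scaffold:
    """Links bricks into one spiking graph, independent of the hardware."""
    def __init__(self):
        self.bricks, self.edges = [], []

    def add(self, brick, feeds=()):
        # record the brick and its incoming connections
        self.bricks.append(brick)
        self.edges.extend((src.name, brick.name) for src in feeds)
        return brick

    def lower(self, backend):
        # a real implementation would translate the composed graph of
        # neurons and synapses into backend-specific configuration here
        return backend.compile(self.bricks, self.edges)

scaffold = Scaffold()
encode = scaffold.add(Brick("encode"))
detect = scaffold.add(Brick("spike_detect"), feeds=[encode])
readout = scaffold.add(Brick("readout"), feeds=[detect])
# scaffold.lower(loihi_backend)  # hypothetical backend object
```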


Deep Learning in Spiking Neural Networks

arXiv.org Artificial Intelligence

Deep learning approaches have recently shown remarkable performance in many areas of pattern recognition. In spite of their power in hierarchical feature extraction and classification, these networks are computationally expensive and difficult to implement on hardware for portable devices. In another vein of research on neural network architectures, spiking neural networks (SNNs) have been described as power-efficient models because of their sparse, spike-based communication framework. SNNs are brain-inspired in that they seek to mimic the accurate and efficient functionality of the brain. Recent studies try to take advantage of both frameworks (deep learning and SNNs) to develop deep SNN architectures that achieve the high performance of proven deep networks while running on bio-inspired, power-efficient platforms. Additionally, as the brain processes different stimulus patterns through multi-layer SNNs communicating by spike trains via adaptive synapses, developing artificial deep SNNs can also be very helpful for understanding the computations performed by biological neural circuits. Drawing on both computational and experimental backgrounds, we provide a comprehensive summary of recent advances in developing deep SNNs that may assist computer scientists interested in developing more advanced and efficient networks, and help experimentalists frame new hypotheses for neural information processing in the brain using a more realistic model.
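Underlying everything surveyed here is the sparse, event-driven unit itself. A minimal leaky integrate-and-fire (LIF) simulation, with illustrative parameter values not drawn from any specific model in the survey, shows how a continuous input is converted into a binary spike train.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_thresh=1.0, v_reset=0.0):
    """Return membrane-potential trace and spike train for a current input."""
    v, trace, spikes = v_reset, [], []
    for i_t in input_current:
        v += dt / tau * (i_t - v)  # leaky integration toward the input
        if v >= v_thresh:          # a threshold crossing emits a spike...
            spikes.append(1)
            v = v_reset            # ...and resets the membrane potential
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(trace), np.array(spikes)

_, spikes = simulate_lif(np.full(1000, 1.5))  # constant drive for 1 s
print(int(spikes.sum()), "spikes in 1 s")     # sparse output, ~45 events
```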