Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning

arXiv.org Artificial Intelligence

Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by the lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs, and the result is transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements to neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
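
To make the flavor of NSAT's event-driven learning concrete, here is a minimal Python sketch of an event-driven random back-propagation (eRBP) style update, in which a fixed random feedback matrix replaces the transposed forward weights of exact backprop and plasticity is gated by the postsynaptic membrane state. The variable names, the boxcar gate, and all parameter values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 100, 50, 10
W_hid = rng.normal(0.0, 0.1, (n_hid, n_in))  # plastic input-to-hidden weights
G = rng.normal(0.0, 1.0, (n_hid, n_out))     # fixed random feedback matrix
lr = 1e-3

def erbp_delta(pre_spikes, v_hid, error, theta=1.0):
    """Weight change triggered only by presynaptic spike events.

    pre_spikes : (n_in,) binary vector of presynaptic spikes this step
    v_hid      : (n_hid,) hidden-layer membrane potentials
    error      : (n_out,) output error (prediction minus target)
    """
    fb = G @ error                                # random feedback of the error
    gate = (np.abs(v_hid) < theta).astype(float)  # boxcar surrogate derivative
    return lr * np.outer(fb * gate, pre_spikes)

# On each simulation step, weights change only where presynaptic spikes occurred:
pre = (rng.random(n_in) < 0.05).astype(float)     # toy spike events
W_hid -= erbp_delta(pre, rng.normal(0, 1, n_hid), rng.normal(0, 1, n_out))
```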


Neuromorphic Processing and Sensing: Evolutionary Progression of AI to Spiking

arXiv.org Artificial Intelligence

The rise of machine learning and deep learning applications requires ever more computational resources to meet the growing demands of an always-connected, automated world. Neuromorphic technologies based on Spiking Neural Network algorithms promise to implement advanced artificial intelligence using a fraction of the computation and power by modeling the functioning, and spiking, of the human brain. With the proliferation of tools and platforms helping data scientists and machine learning engineers develop the latest innovations in artificial and deep neural networks, a transition to this new paradigm will require building on the current, well-established foundations. This paper explains the theoretical workings of spike-based neuromorphic technologies and overviews the state of the art in hardware processors, software platforms and neuromorphic sensing devices. A progression path is paved for current machine learning specialists to update their skillset, as well as their classification or predictive models, from the current generation of deep neural networks to SNNs. This can be achieved by leveraging existing specialized hardware in the form of SpiNNaker and the Nengo migration toolkit. First-hand experimental results of converting a VGG-16 neural network to an SNN are shared. A forward gaze into industrial, medical and commercial applications that can readily benefit from SNNs wraps up this investigation into the neuromorphic computing future.
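
As a rough illustration of the migration path described above, the sketch below converts a pretrained Keras VGG-16 into a spiking network with NengoDL's Converter. The specific settings (average-pooling substitution, firing-rate scaling, synaptic filter, presentation length) are assumptions for illustration; the paper's exact conversion settings may differ.

```python
import nengo
import nengo_dl
import tensorflow as tf

# Pretrained ANN to migrate; any Keras model follows the same path.
model = tf.keras.applications.VGG16(weights="imagenet")

converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    max_to_avg_pool=True,    # max pooling has no direct spiking analog (assumption)
    scale_firing_rates=100,  # higher rates: better accuracy, more spikes (assumption)
    synapse=0.005,           # low-pass filter smoothing the spike trains (assumption)
)

with nengo_dl.Simulator(converter.net, minibatch_size=1) as sim:
    # Present each image for several timesteps so firing rates can settle,
    # then read the classification from the final timestep.
    n_steps = 50  # presentation time per image (assumption)
    image = tf.random.uniform((1, 224, 224, 3)).numpy().reshape(1, 1, -1)
    tiled = image.repeat(n_steps, axis=1)  # repeat the flattened image over time
    out = sim.predict({converter.inputs[model.input]: tiled})
    print(out[converter.outputs[model.output]][:, -1].argmax())
```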


Deep Learning versus Biological Neurons: floating-point numbers, spikes, and neurotransmitters

#artificialintelligence

In recent years, "deep learning" AI models have often been touted as "working like the brain," in that they are composed of artificial neurons mimicking those of biological brains. From the perspective of a neuroscientist, however, the differences between deep learning neurons and biological neurons are numerous and distinct. In this post we'll start by describing a few key characteristics of biological neurons, and how they are simplified to obtain deep learning neurons. We'll then speculate on how these differences impose limits on deep learning networks, and how movement toward more realistic models of biological neurons might advance AI as we currently know it. Typical biological neurons are individual cells, each composed of the main body of the cell along with many tendrils that extend from that body.
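
The simplification the post describes can be made concrete in a few lines of Python: a deep learning neuron is a single weighted sum passed through a static nonlinearity, while even a minimal model of a biological neuron integrates its input over time and communicates through discrete spikes. The leaky integrate-and-fire model below is a sketch with illustrative parameter values, not a faithful biophysical model.

```python
import numpy as np

# A deep learning "neuron": one weighted sum through a static nonlinearity.
# No time, no membrane dynamics, no spikes.
def artificial_neuron(x, w, b):
    return max(0.0, float(np.dot(w, x) + b))  # ReLU activation

# A heavily simplified biological neuron: a leaky integrate-and-fire model
# that accumulates input current over time and emits discrete spikes.
def lif_neuron(input_current, tau=20.0, v_thresh=1.0, dt=1.0):
    v, spike_times = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration of the input
        if v >= v_thresh:            # threshold crossing produces a spike
            spike_times.append(t)
            v = 0.0                  # reset the membrane after the spike
    return spike_times
```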


Exploring Deep Spiking Neural Networks for Automated Driving Applications

arXiv.org Machine Learning

Neural networks have become the standard model for various computer vision tasks in automated driving, including semantic segmentation, moving object detection, depth estimation and visual odometry. The most commonly used flavors are convolutional (CNNs) and recurrent (RNNs) networks. In spite of rapid progress in embedded processors, power consumption and cost are still bottlenecks. Spiking Neural Networks (SNNs) are gradually progressing toward low-power, event-driven hardware architectures with the potential for high efficiency. In this paper, we explore the role of deep SNNs for automated driving applications. We provide an overview of progress on SNNs and argue why they can be a good fit for automated driving applications.


Training spiking multi-layer networks with surrogate gradients on an analog neuromorphic substrate

arXiv.org Machine Learning

Spiking neural networks are nature's solution for parallel information processing with high temporal precision at a low metabolic energy cost. To that end, biological neurons integrate inputs as an analog sum and communicate their outputs digitally as spikes, i.e., sparse binary events in time. These architectural principles can be mirrored effectively in analog neuromorphic hardware. Nevertheless, training spiking neural networks with sparse activity on hardware devices remains a major challenge. This is primarily due to the lack of suitable training methods that take into account device-specific imperfections and operate at the level of individual spikes instead of firing rates. To tackle this issue, we developed a hardware-in-the-loop strategy to train multi-layer spiking networks using surrogate gradients on the analog BrainScaleS-2 chip. Specifically, we used the hardware to compute the forward pass of the network, while the backward pass was computed in software. We evaluated our approach on downscaled 16x16 versions of the MNIST and Fashion-MNIST datasets, in which spike latencies encoded pixel intensities. The analog neuromorphic substrate closely matched the performance of equivalently sized networks implemented in software, and it is capable of processing 70,000 patterns per second with a power consumption of less than 300 mW. Adding activity regularization resulted in sparse network activity with about 20 spikes per input, at little to no reduction in classification performance. Overall, our work demonstrates low-energy spiking network processing on an analog neuromorphic substrate and sets several new benchmarks for hardware systems in terms of classification accuracy, processing speed, and efficiency. Importantly, it emphasizes the value of hardware-in-the-loop training and paves the way toward energy-efficient information processing on non-von Neumann architectures.
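
For readers unfamiliar with surrogate gradients, the sketch below shows the core trick in PyTorch: the forward pass keeps the non-differentiable spike threshold, while the backward pass substitutes a smooth surrogate derivative (here a SuperSpike-style fast sigmoid; the steepness beta is an illustrative assumption). This corresponds to the in-software half of the hardware-in-the-loop scheme, not to the BrainScaleS-2 code itself.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike function with a surrogate gradient."""

    beta = 10.0  # steepness of the surrogate derivative (assumption)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # spike whenever the membrane exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: smooth, peaked at the firing threshold.
        surrogate = 1.0 / (SurrGradSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply  # drop-in replacement for a hard threshold
```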