Cherubini, Giovanni
High-performance deep spiking neural networks with 0.3 spikes per neuron
Stanojevic, Ana, Woźniak, Stanisław, Bellec, Guillaume, Cherubini, Giovanni, Pantazi, Angeliki, Gerstner, Wulfram
Communication by rare, binary spikes is a key factor in the energy efficiency of biological brains. However, biologically inspired spiking neural networks (SNNs) are harder to train than artificial neural networks (ANNs). This is puzzling given that theoretical results provide exact mapping algorithms from ANNs to SNNs with time-to-first-spike (TTFS) coding. In this paper we analyze, in theory and simulation, the learning dynamics of TTFS networks and identify a specific instance of the vanishing-or-exploding gradient problem. While two choices of SNN mappings solve this problem at initialization, only the one with a constant slope of the neuron membrane potential at the threshold guarantees equivalence of the training trajectory between SNNs and ANNs with rectified linear units. We demonstrate that deep SNN models can be trained to exactly the same performance as their ANN counterparts, surpassing previous SNNs on image classification datasets such as MNIST/Fashion-MNIST, CIFAR-10/CIFAR-100 and Places365. Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation. We show that fine-tuning SNNs with our robust gradient descent algorithm enables their optimization for hardware implementations with low latency and resilience to noise and quantization.
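As a toy illustration of the TTFS code that such ANN-to-SNN mappings build on (a sketch under our own simplifying assumptions, not the paper's exact mapping), the snippet below represents each activation by a spike time, with larger values spiking earlier, so that a ReLU layer and its TTFS counterpart compute the same function:

    import numpy as np

    def relu_layer(x, W, b):
        """Standard ReLU layer: a = max(0, W x + b)."""
        return np.maximum(0.0, W @ x + b)

    def ttfs_layer(t_in, W, b, t_min=0.0, t_max=1.0):
        """Hypothetical TTFS counterpart (illustrative linear encoding only).

        Inputs are spike times in [t_min, t_max]; larger values spike earlier.
        """
        x = t_max - t_in                     # decode spike times to values
        a = np.maximum(0.0, W @ x + b)       # same ReLU computation
        return np.clip(t_max - a, t_min, t_max)  # re-encode as spike times

Under this encoding a neuron whose ReLU output is zero maps to the latest possible spike time, i.e. it stays effectively silent, which is the intuition behind the very low spike counts reported above.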
Factorizers for Distributed Sparse Block Codes
Hersche, Michael, Terzic, Aleksandar, Karunaratne, Geethan, Langenegger, Jovin, Pouget, Angéline, Cherubini, Giovanni, Benini, Luca, Sebastian, Abu, Rahimi, Abbas
Distributed sparse block codes (SBCs) provide compact representations for encoding and manipulating symbolic data structures using fixed-width vectors. One major challenge, however, is to disentangle, or factorize, such data structures into their constituent elements without having to search through all possible combinations. This factorization becomes more challenging when the query is a noisy SBC whose symbol representations are relaxed due to perceptual uncertainty and to the approximations made when modern neural networks are used to generate the query vectors. To address these challenges, we first propose a fast and highly accurate method for factorizing a more flexible, and hence generalized, form of SBCs, dubbed GSBCs. Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric. Its random sampling mechanism, combined with search in superposition, allows us to determine analytically the expected number of decoding iterations, which matches the empirical observations up to the GSBC's bundling capacity. Second, the proposed factorizer maintains its high accuracy when queried by noisy product vectors generated using deep convolutional neural networks (CNNs). This enables it to replace the large fully connected layer (FCL) in CNNs, whereby $C$ trainable class vectors, or attribute combinations, can be implicitly represented by our factorizer with $F$ factor codebooks, each holding $\sqrt[F]{C}$ fixed codevectors. We provide a methodology to flexibly integrate our factorizer into the classification layer of CNNs with a novel loss function. We demonstrate the feasibility of our method on four deep CNN architectures over the CIFAR-100, ImageNet-1K, and RAVEN datasets. In all use cases, the number of parameters and operations is significantly reduced compared to the FCL.
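For intuition, the sketch below implements an iterative factorizer that searches in superposition for dense bipolar vectors, a resonator-network-style simplification under our own assumptions; the paper's GSBC factorizer instead operates on sparse block codes and adds the threshold nonlinearity, conditional random sampling, and $\ell_\infty$ similarity described above:

    import numpy as np

    def factorize(s, codebooks, iters=100):
        """Toy iterative factorizer for dense bipolar vectors (not the GSBC method).

        s         : (D,) product vector, element-wise product of one codevector per factor
        codebooks : list of (M_f, D) arrays of +/-1 codevectors, one per factor
        """
        D = s.shape[0]
        # Start each estimate from the superposition of its whole codebook.
        est = [np.sign(cb.sum(axis=0)) for cb in codebooks]
        est = [np.where(e == 0, 1, e) for e in est]
        for _ in range(iters):
            for f, cb in enumerate(codebooks):
                # Unbind the current estimates of all other factors from the query.
                others = np.ones(D)
                for g, e in enumerate(est):
                    if g != f:
                        others *= e
                unbound = s * others              # bipolar binding is self-inverse
                sims = cb @ unbound               # similarity to every codevector
                est[f] = np.sign(cb.T @ sims)     # weighted superposition of candidates
                est[f] = np.where(est[f] == 0, 1, est[f])
        # Read out the best-matching codevector index for each factor.
        return [int(np.argmax(cb @ e)) for cb, e in zip(codebooks, est)]

Because each estimate is refined against the superposition of all remaining hypotheses, the search avoids enumerating every combination of codevectors, which is the property the factorizer exploits when it replaces the FCL.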
An Exact Mapping From ReLU Networks to Spiking Neural Networks
Stanojevic, Ana, Woźniak, Stanisław, Bellec, Guillaume, Cherubini, Giovanni, Pantazi, Angeliki, Gerstner, Wulfram
The energy consumption of deep artificial neural networks (ANNs) with thousands of neurons poses a problem not only during training [1], but also during inference [2]. Among other alternatives [3, 4, 5], hardware implementations of spiking neural networks (SNNs) [6, 7, 8, 9, 10] have been proposed as an energy-efficient solution, not only for large centralized applications but also for computing in edge devices [11, 12, 13]. In SNNs, neurons communicate by ultra-short pulses, called action potentials or spikes, which can be considered point-like events in continuous time. In deep multi-layer SNNs, if a neuron in layer n fires a spike, this event changes the voltage trajectory of neurons in layer n + 1. If, after some time, the trajectory of a neuron in layer n + 1 reaches a threshold value, then this neuron fires a spike. While there is no general consensus on how spike trains are best decoded in biology [14, 15, 16], multiple pieces of evidence indicate that, immediately after the onset of a stimulus, populations of neurons in auditory, visual, or tactile sensory areas respond in such a way that the timing of each neuron's first spike after stimulus onset carries a large amount of information about the stimulus features [17, 18, 19]. These and similar observations have given rise to the idea that, immediately after stimulus onset, an initial wave of activity travels across several brain areas in the sensory processing stream [20, 21, 22, 23, 24]. We take inspiration from these observations and assume in this paper that information is encoded in the exact spike times of each neuron and that spikes are transmitted in a wave-like manner across the layers of a deep feedforward neural network. Specifically, we use coding by time-to-first-spike (TTFS) [15], a timing-based code originally proposed in neuroscience [15, 17, 18, 22], which has recently attracted substantial attention in the context of neuromorphic implementations [8, 9, 10, 25, 26, 27, 28, 29, 30].
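A minimal sketch of this layer-to-layer, single-spike propagation, assuming non-leaky integrate-and-fire neurons driven by step currents (our own simplification for illustration, not the exact ReLU-to-SNN mapping derived in the paper), could look as follows:

    import numpy as np

    def ttfs_forward(t_in, W, theta=1.0, dt=1e-3, t_end=0.1):
        """One feed-forward TTFS layer (illustrative only).

        t_in  : (N_in,) spike times of the previous layer (np.inf = no spike)
        W     : (N_out, N_in) synaptic weights
        theta : firing threshold
        Each input spike switches on a constant current through its synapse;
        an output neuron emits its single spike when its integrated potential
        first crosses theta.
        """
        n_out = W.shape[0]
        v = np.zeros(n_out)               # membrane potentials
        t_spike = np.full(n_out, np.inf)  # first (and only) spike per neuron
        for t in np.arange(0.0, t_end, dt):
            active = (t_in <= t).astype(float)  # inputs that have spiked so far
            v += dt * (W @ active)              # integrate piecewise-constant current
            newly = (v >= theta) & np.isinf(t_spike)
            t_spike[newly] = t                  # record first threshold crossing
        return t_spike

Stacking such layers and feeding the output spike times of layer n as t_in of layer n + 1 reproduces the wave-like transmission described above, with each neuron's information carried entirely by its first spike time.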
In-memory hyperdimensional computing
Karunaratne, Geethan, Le Gallo, Manuel, Cherubini, Giovanni, Benini, Luca, Rahimi, Abbas, Sebastian, Abu
Hyperdimensional computing (HDC) is an emerging computing framework that takes inspiration from attributes of neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness. When employed for machine learning tasks such as learning and classification, HDC involves the manipulation and comparison of large patterns within memory. Moreover, a key attribute of HDC is its robustness to the imperfections of the computational substrates on which it is implemented. It is therefore particularly amenable to emerging non-von Neumann paradigms such as in-memory computing, where the physical attributes of nanoscale memristive devices are exploited to perform computation in place. Here, we present a complete in-memory HDC system that achieves a near-optimum trade-off between design complexity and classification accuracy on three prototypical HDC-related learning tasks, namely language classification, news classification, and hand-gesture recognition from electromyography signals. Accuracies comparable to those of software implementations are demonstrated experimentally using 760,000 phase-change memory devices performing analog in-memory computing.
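For concreteness, the core HDC operations that such a system manipulates in memory, binding, bundling, and similarity search, can be sketched in software as follows (a minimal illustration with dense binary hypervectors and hypothetical toy data; the paper carries out these operations in analog with phase-change memory):

    import numpy as np

    D = 10_000  # hypervector dimensionality (the "hyperdimensional" part)
    rng = np.random.default_rng(0)

    def random_hv():
        """Random dense binary hypervector."""
        return rng.integers(0, 2, D, dtype=np.uint8)

    def bind(a, b):
        """Binding: associates two hypervectors (XOR for binary codes)."""
        return a ^ b

    def bundle(hvs):
        """Bundling: component-wise majority vote over a set of hypervectors."""
        return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

    def similarity(a, b):
        """Normalized Hamming similarity used for associative-memory lookup."""
        return 1.0 - np.count_nonzero(a ^ b) / D

    # Toy usage: build class prototypes by bundling training hypervectors,
    # then classify a query by nearest prototype; this associative search is
    # the step mapped onto analog in-memory computing in the paper.
    train = {c: [random_hv() for _ in range(5)] for c in ("A", "B")}
    prototypes = {c: bundle(v) for c, v in train.items()}
    query = train["A"][0]
    pred = max(prototypes, key=lambda c: similarity(query, prototypes[c]))
    print(pred)

The robustness mentioned above follows from the distributed representation: flipping a small fraction of a hypervector's components barely changes its Hamming similarity to the correct prototype, which is why device-level imperfections are well tolerated.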