Khacef, Lyes
Towards efficient keyword spotting using spike-based time difference encoders
Pequeño-Zurro, Alejandro, Khacef, Lyes, Panzeri, Stefano, Chicca, Elisabetta
Keyword spotting on edge devices is becoming increasingly important as voice-activated assistants come into widespread use. However, deployment is often limited by the extreme low-power constraints of the target embedded systems. Here, we explore the performance of the Time Difference Encoder (TDE) in keyword spotting. This recent neuron model encodes the time difference between input spikes into an instantaneous firing rate and spike count, enabling efficient keyword spotting on neuromorphic processors. We use the TIDIGITS dataset of spoken digits, with a formant decomposition and a rate-based encoding into spikes. We compare three Spiking Neural Network (SNN) architectures to learn and classify these spatio-temporal signals. The proposed architectures share a three-layer structure and differ only in their hidden layer, which is composed of either (1) feedforward TDE neurons, (2) feedforward Current-Based Leaky Integrate-and-Fire (CuBa-LIF) neurons, or (3) recurrent CuBa-LIF neurons. We first show that the spike trains of the frequency-converted spoken digits carry a large amount of information in the temporal domain, underlining the importance of better exploiting temporal encoding for such a task. We then train the three SNNs with the same number of synaptic weights to quantify and compare their performance in terms of accuracy and synaptic operations. The resulting accuracy of the feedforward TDE network (89%) is higher than that of the feedforward CuBa-LIF network (71%) and close to that of the recurrent CuBa-LIF network (91%). However, the feedforward TDE network performs 92% fewer synaptic operations than the recurrent CuBa-LIF network with the same number of synapses. In addition, the results of the TDE network are highly interpretable and correlated with the frequency and timescale features of the spoken keywords in the dataset. Our findings suggest that the TDE is a promising neuron model for scalable, event-driven processing of spatio-temporal patterns.
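Since the abstract only names the mechanism, a minimal single-neuron sketch of the facilitatory/trigger dynamics commonly used to describe the TDE may help fix intuition. It is a plain Python/NumPy sketch; the time constants, weight, and threshold are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def tde_response(fac_spikes, trig_spikes, dt=1e-4,
                 tau_fac=5e-3, tau_syn=5e-3, tau_mem=10e-3,
                 w=5.0, v_thresh=0.5):
    """Output spike train of one TDE unit driven by a facilitatory and a
    trigger spike train (binary arrays of equal length, one entry per step)."""
    fac_trace, i_syn, v_mem = 0.0, 0.0, 0.0
    out = np.zeros(len(fac_spikes))
    for t in range(len(fac_spikes)):
        # A facilitatory spike sets a gain trace that decays exponentially.
        fac_trace = fac_trace * np.exp(-dt / tau_fac) + fac_spikes[t]
        # A trigger spike injects a current scaled by the current gain, so a
        # shorter facilitatory-to-trigger delay produces a larger EPSC.
        i_syn = i_syn * np.exp(-dt / tau_syn) + w * fac_trace * trig_spikes[t]
        # A current-based LIF membrane integrates the synaptic current.
        v_mem = v_mem * np.exp(-dt / tau_mem) + i_syn * (dt / tau_mem)
        if v_mem >= v_thresh:
            out[t] = 1.0
            v_mem = 0.0  # reset after each output spike
    return out

# With these illustrative parameters, shorter delays yield more output spikes.
T = 300
fac, trig = np.zeros(T), np.zeros(T)
fac[10], trig[30] = 1.0, 1.0  # 2 ms facilitatory-to-trigger delay
print(int(tde_response(fac, trig).sum()))
```

The spike count and instantaneous rate of the output thus encode how close in time the two inputs were, which is the property the keyword-spotting networks in the paper exploit.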
NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking
Yik, Jason, Ahmed, Soikat Hasan, Ahmed, Zergham, Anderson, Brian, Andreou, Andreas G., Bartolozzi, Chiara, Basu, Arindam, Blanken, Douwe den, Bogdan, Petrut, Bohte, Sander, Bouhadjar, Younes, Buckley, Sonia, Cauwenberghs, Gert, Corradi, Federico, de Croon, Guido, Danielescu, Andreea, Daram, Anurag, Davies, Mike, Demirag, Yigit, Eshraghian, Jason, Forest, Jeremy, Furber, Steve, Furlong, Michael, Gilra, Aditya, Indiveri, Giacomo, Joshi, Siddharth, Karia, Vedant, Khacef, Lyes, Knight, James C., Kriener, Laura, Kubendran, Rajkumar, Kudithipudi, Dhireesha, Lenz, Gregor, Manohar, Rajit, Mayr, Christian, Michmizos, Konstantinos, Muir, Dylan, Neftci, Emre, Nowotny, Thomas, Ottati, Fabrizio, Ozcelikkale, Ayca, Pacik-Nelson, Noah, Panda, Priyadarshini, Pao-Sheng, Sun, Payvand, Melika, Pehle, Christian, Petrovici, Mihai A., Posch, Christoph, Renner, Alpha, Sandamirskaya, Yulia, Schaefer, Clemens JS, van Schaik, André, Schemmel, Johannes, Schuman, Catherine, Seo, Jae-sun, Sheik, Sadique, Shrestha, Sumit Bam, Sifalakis, Manolis, Sironi, Amos, Stewart, Kenneth, Stewart, Terrence C., Stratmann, Philipp, Tang, Guangzhi, Timcheck, Jonathan, Verhelst, Marian, Vineyard, Craig M., Vogginger, Bernhard, Yousefzadeh, Amirreza, Zhou, Biyan, Zohora, Fatima Tuz, Frenkel, Charlotte, Reddi, Vijay Janapa
The field of neuromorphic computing holds great promise for advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear benchmarking standards, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goal of NeuroBench is to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.
Neuromorphic hardware as a self-organizing computing system
Khacef, Lyes, Girau, Bernard, Rougier, Nicolas, Upegui, Andres, Miramond, Benoit
This paper presents SOMA, a self-organizing neuromorphic architecture. The objective is to study neural-based self-organization in computing systems and to demonstrate the feasibility of a self-organizing hardware structure. Since these properties emerge from large-scale, fully connected neural maps, we focus on the definition of a self-organizing hardware architecture based on digital spiking neurons, which offer hardware efficiency. From a biological point of view, this corresponds to a combination of the so-called synaptic and structural plasticities. We intend to define computational models able to self-organize simultaneously at the computation and communication levels, and we want these models to be hardware-compliant, fault-tolerant, and scalable by means of a neuro-cellular structure.
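The abstract refers to self-organization in neural maps without detailing a learning rule, so the sketch below shows a generic Kohonen-style self-organizing-map update as an illustration of the synaptic-plasticity side of such self-organization. It is not the SOMA spiking model, it omits structural plasticity, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 3  # 8x8 map of 3-dimensional prototype vectors
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.1, sigma=2.0):
    """One synaptic-plasticity step: find the best-matching unit for input x,
    then pull neighbouring prototypes towards x with a Gaussian neighbourhood."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    neighbourhood = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    weights += lr * neighbourhood[..., None] * (x - weights)
    return bmu

for _ in range(1000):  # unsupervised adaptation loop on random inputs
    som_step(rng.random(dim), weights)
```

In SOMA, an analogous adaptation would be distributed over a neuro-cellular grid of digital spiking neurons and complemented by structural plasticity, i.e. reorganization of the connections themselves.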