Strukov, Dmitri
Quantized Context Based LIF Neurons for Recurrent Spiking Neural Networks in 45nm
Bezugam, Sai Sukruth, Wu, Yihao, Yoo, JaeBum, Strukov, Dmitri, Kim, Bongjin
In this study, we propose the first hardware implementation of a context-based recurrent spiking neural network (RSNN), emphasizing the integration of dual information streams within neocortical pyramidal neurons, specifically the Context-Dependent Leaky Integrate-and-Fire (CLIF) neuron model, an essential element of RSNNs. We present a quantized version of the CLIF neuron (qCLIF), developed through a hardware-software codesign approach that exploits the sparse activity of RSNNs. Implemented in a 45nm technology node, the qCLIF neuron is compact (900 um^2) and achieves 90% accuracy on the DVS gesture classification dataset despite 8-bit quantization. Our analysis spans network configurations from 10 to 200 qCLIF neurons, supporting up to 82k synapses within a 1.86 mm^2 footprint, demonstrating scalability and efficiency.
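The abstract does not spell out the qCLIF update rule, but the general idea of a fixed-point leaky integrate-and-fire neuron with a second, context-modulated input stream can be sketched as follows. This is a minimal illustration only; the names (qclif_step, quantize), the gain-based context gating, and all constants are assumptions for exposition, not the paper's actual circuit or dynamics.

```python
import numpy as np

def quantize(x, bits=8, scale=64):
    """Map a real-valued input to signed fixed-point at the given bit width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return int(np.clip(round(x * scale), lo, hi))

def qclif_step(v, soma_in, ctx_in, leak=0.9, theta=100, bits=8):
    """One quantized, context-dependent LIF update (illustrative only).

    v       : quantized membrane potential (int)
    soma_in : quantized feedforward (basal) input current
    ctx_in  : quantized context (apical) input current
    The context stream modulates integration, loosely mimicking the
    two information streams of neocortical pyramidal neurons.
    """
    gain = 2 if ctx_in > 0 else 1            # crude context-dependent gain
    v = int(leak * v) + soma_in * gain       # leaky integration in fixed point
    v = max(min(v, (1 << (bits - 1)) - 1), -(1 << (bits - 1)))  # saturate to 8 bits
    spike = v >= theta
    if spike:
        v = 0                                # reset after firing
    return v, spike

# Example: the neuron integrates for a step, then saturates and fires.
v = 0
for t in range(3):
    v, spk = qclif_step(v, soma_in=quantize(0.5), ctx_in=quantize(0.3))
    print(t, v, spk)
```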
Applications and Techniques for Fast Machine Learning in Science
Deiana, Allison McCarn, Tran, Nhan, Agar, Joshua, Blott, Michaela, Di Guglielmo, Giuseppe, Duarte, Javier, Harris, Philip, Hauck, Scott, Liu, Mia, Neubauer, Mark S., Ngadiuba, Jennifer, Ogrenci-Memik, Seda, Pierini, Maurizio, Aarrestad, Thea, Bähr, Steffen, Becker, Jürgen, Berthold, Anne-Sophie, Bonventre, Richard J., Müller Bravo, Tomás E., Diefenthaler, Markus, Dong, Zhen, Fritzsche, Nick, Gholami, Amir, Govorkova, Ekaterina, Hazelwood, Kyle J, Herwig, Christian, Khan, Babar, Kim, Sehoon, Klijnsma, Thomas, Liu, Yaling, Lo, Kin Ho, Nguyen, Tri, Pezzullo, Gianantonio, Rasoulinezhad, Seyedramin, Rivera, Ryan A., Scholberg, Kate, Selig, Justin, Sen, Sougata, Strukov, Dmitri, Tang, William, Thais, Savannah, Unger, Kai Lukas, Vilalta, Ricardo, von Krosigk, Belina, Warburton, Thomas K., Flechas, Maria Acosta, Aportela, Anthony, Calvet, Thomas, Cristella, Leonardo, Diaz, Daniel, Doglioni, Caterina, Galati, Maria Domenica, Khoda, Elham E, Fahim, Farah, Giri, Davide, Hawks, Benjamin, Hoang, Duc, Holzman, Burt, Hsu, Shih-Chieh, Jindariani, Sergo, Johnson, Iris, Kansal, Raghav, Kastner, Ryan, Katsavounidis, Erik, Krupa, Jeffrey, Li, Pan, Madireddy, Sandeep, Marx, Ethan, McCormack, Patrick, Meza, Andres, Mitrevski, Jovan, Mohammed, Mohammed Attia, Mokhtar, Farouk, Moreno, Eric, Nagu, Srishti, Narayan, Rohin, Palladino, Noah, Que, Zhiqiang, Park, Sang Eon, Ramamoorthy, Subramanian, Rankin, Dylan, Rothman, Simon, Sharma, Ashish, Summers, Sioni, Vischia, Pietro, Vlimant, Jean-Roch, Weng, Olivia
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give ample examples of and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.