NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes
Hao-Lun Sun
Energy-efficient computing is of primary importance to the effective deployment of deep neural networks (DNNs), particularly in edge devices and on-chip AI systems. Increasing the energy efficiency of DNN computation and lowering its carbon footprint require iterative efforts from both chip designers and algorithm developers.
Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models
Ruoxi Sun, Scott Linderman, Ian Kinsella, Liam Paninski
Recent progress in the development of voltage indicators [1-8] has brought us closer to a longstanding goal in cellular neuroscience: imaging the full spatiotemporal voltage on a dendritic tree. These recordings have the potential (pun not intended) to resolve fundamental questions about the computations performed by dendrites -- questions that have remained open for more than a century [9,10].
Recent advances in optical voltage sensors have brought us closer to a critical goal in cellular neuroscience: imaging the full spatiotemporal voltage on a dendritic tree. However, current sensors and imaging approaches still face significant limitations in SNR and sampling frequency; therefore, statistical denoising and interpolation methods remain critical for understanding single-trial spatiotemporal dendritic voltage dynamics. Previous denoising approaches were either based on an inadequate linear voltage model or scaled poorly to large trees. Here we introduce a scalable fully Bayesian approach. We develop a generative nonlinear model that requires few parameters per compartment of the cell but is nonetheless flexible enough to sample realistic spatiotemporal data. The model captures different dynamics in each compartment and leverages biophysical knowledge to constrain intra- and inter-compartmental dynamics. We obtain a full posterior distribution over spatiotemporal voltage via an augmented Gibbs sampling algorithm. The nonlinear smoother model outperforms previously developed linear methods, and scales to much larger systems than previous methods based on sequential Monte Carlo approaches.
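To ground the comparison, here is a minimal sketch of the per-compartment linear-Gaussian smoothing that the "previously developed linear methods" amount to: a Kalman filter followed by a Rauch-Tung-Striebel backward pass on one noisy voltage trace. The transition coefficient, noise variances, and synthetic data are illustrative assumptions, not the paper's nonlinear model or its Gibbs sampler.

```python
import numpy as np

# Minimal sketch: RTS smoothing of one compartment's voltage under a
# linear-Gaussian state space model x_t = a*x_{t-1} + N(0, q),
# y_t = x_t + N(0, r). This is the kind of linear baseline the paper's
# nonlinear Bayesian smoother improves on; a, q, r are assumptions.

def rts_smooth(y, a=0.95, q=0.1, r=1.0):
    T = len(y)
    mu_f, P_f = np.zeros(T), np.zeros(T)   # filtered moments
    mu_p, P_p = np.zeros(T), np.zeros(T)   # one-step predicted moments
    mu, P = y[0], r                        # crude initialization
    for t in range(T):
        if t > 0:                          # predict
            mu, P = a * mu, a * a * P + q
        mu_p[t], P_p[t] = mu, P
        k = P / (P + r)                    # Kalman gain
        mu, P = mu + k * (y[t] - mu), (1 - k) * P
        mu_f[t], P_f[t] = mu, P
    mu_s = mu_f.copy()                     # backward (smoothing) pass
    for t in range(T - 2, -1, -1):
        g = a * P_f[t] / P_p[t + 1]
        mu_s[t] = mu_f[t] + g * (mu_s[t + 1] - mu_p[t + 1])
    return mu_s

# Usage: denoising a synthetic trace with sparse depolarization events.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.95 * x[t - 1] + (5.0 if rng.random() < 0.02 else 0.0)
y = x + rng.normal(0, 1.0, 500)
print(np.mean((rts_smooth(y) - x) ** 2) < np.mean((y - x) ** 2))  # True
```

The paper replaces the linear transition with a flexible, biophysically constrained nonlinear one shared across a full dendritic tree, and returns a posterior distribution over voltage rather than only a point estimate.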
Random Feature Spiking Neural Networks
Gollwitzer, Maximilian, Dietrich, Felix
Spiking Neural Networks (SNNs) as Machine Learning (ML) models have recently received a lot of attention as a potentially more energy-efficient alternative to conventional Artificial Neural Networks. The non-differentiability and sparsity of the spiking mechanism can make these models very difficult to train with algorithms based on propagating gradients through the spiking non-linearity. We address this problem by adapting the paradigm of Random Feature Methods (RFMs) from Artificial Neural Networks (ANNs) to Spike Response Model (SRM) SNNs. This approach allows training of SNNs without approximation of the spike function gradient. Concretely, we propose a novel data-driven, fast, high-performance, and interpretable algorithm for end-to-end training of SNNs inspired by the SWIM algorithm for RFM-ANNs, which we coin S-SWIM. We provide a thorough theoretical discussion and supplementary numerical experiments showing that S-SWIM can reach high accuracy on time series forecasting as a standalone method and serve as an effective initialisation strategy before gradient-based training. Additional ablation studies show that our proposed method performs better than random sampling of network weights.
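For readers unfamiliar with the RFM paradigm the paper adapts, the sketch below shows it in the plain (non-spiking) ANN setting: hidden weights are sampled rather than trained, and only a linear readout is fit in closed form, so no gradient ever passes through the nonlinearity. The tanh feature map, ridge penalty, and toy forecasting task are illustrative assumptions; S-SWIM's data-driven weight sampling for SRM neurons is more involved.

```python
import numpy as np

# Minimal sketch of the Random Feature Method paradigm in the ANN
# setting: hidden weights are sampled, never trained, and only the
# linear readout is fit in closed form -- so no gradient passes through
# the nonlinearity. The tanh features, ridge penalty, and task are
# illustrative assumptions, not the paper's SRM/S-SWIM construction.

def fit_random_features(X, y, width=256, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (X.shape[1], width))  # sampled hidden weights
    b = rng.uniform(-np.pi, np.pi, width)
    H = np.tanh(X @ W + b)                         # random feature map
    # Closed-form ridge regression for the readout weights.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(width), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: one-step-ahead forecasting of a noisy sine from 8 lagged values.
rng = np.random.default_rng(1)
s = np.sin(np.linspace(0, 40, 2000)) + 0.05 * rng.normal(size=2000)
lags = 8
X = np.stack([s[i:i - lags] for i in range(lags)], axis=1)
y = s[lags:]
W, b, beta = fit_random_features(X[:1500], y[:1500])
mse = np.mean((predict(X[1500:], W, b, beta) - y[1500:]) ** 2)
print(f"test MSE: {mse:.4f}")
```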
GAVINA: flexible aggressive undervolting for bit-serial mixed-precision DNN acceleration
Fornt, Jordi, Fontova-Musté, Pau, Gras, Adrian, Lahyani, Omar, Caro, Martí, Abella, Jaume, Moll, Francesc, Altet, Josep
Voltage overscaling, or undervolting, is an enticing approximate technique in the context of energy-efficient Deep Neural Network (DNN) acceleration, given the quadratic relationship between power and voltage. Nevertheless, its very high error rate has thwarted its general adoption. Moreover, recent undervolting accelerators rely on 8-bit arithmetic and cannot compete with state-of-the-art low-precision (<8b) architectures. To overcome these issues, we propose a new technique called Guarded Aggressive underVolting (GAV), which combines the ideas of undervolting and bit-serial computation to create a flexible approximation method based on aggressively lowering the supply voltage on a select number of least significant bit combinations. Based on this idea, we implement GAVINA (GAV mIxed-precisioN Accelerator), a novel architecture that supports arbitrary mixed precision and flexible undervolting, with an energy efficiency of up to 89 TOPS/W in its most aggressive configuration. By developing an error model of GAVINA, we show that GAV can achieve an energy efficiency boost of 20% via undervolting, with negligible accuracy degradation on ResNet-18.
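A toy model conveys the GAV intuition: in a bit-serial dot product, only the low-order bit planes run at the aggressive voltage, so the rare bit flips they suffer perturb the result by at most a few least significant bits, while per-cycle energy falls with the square of the voltage. The error probability, voltage levels, and energy model below are illustrative assumptions, not GAVINA's measured behavior.

```python
import numpy as np

# Toy sketch of Guarded Aggressive underVolting (GAV): a bit-serial dot
# product where only the least significant bit planes run undervolted
# and may flip, while the guarded high-order planes run at nominal
# voltage. p_err, the voltages, and the per-cycle energy ~ V^2 model
# are illustrative assumptions, not GAVINA's measured behavior.

def bit_serial_dot(w, x, bits=8, lsb_undervolted=3,
                   p_err=1e-3, v_nom=0.8, v_low=0.55, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    acc, energy = 0.0, 0.0
    for k in range(bits):                  # one bit plane per cycle
        plane = (w >> k) & 1               # k-th bit of every weight
        if k < lsb_undervolted:            # aggressive voltage: rare flips
            plane = plane ^ (rng.random(plane.shape) < p_err)
            energy += v_low ** 2           # dynamic energy scales with V^2
        else:                              # guarded nominal voltage
            energy += v_nom ** 2
        acc += (plane * x).sum() * (1 << k)
    return acc, energy

# Usage: errors confined to low bit planes barely perturb the result,
# while the undervolted cycles cut the (modeled) energy by ~20%.
rng = np.random.default_rng(2)
w = rng.integers(0, 256, 64)               # unsigned 8-bit weights
x = rng.normal(0, 1, 64)
exact = float((w * x).sum())
approx, e_gav = bit_serial_dot(w, x, rng=rng)
_, e_nom = bit_serial_dot(w, x, lsb_undervolted=0, rng=rng)
print(f"relative error: {abs(approx - exact) / abs(exact):.2e}")
print(f"energy ratio (GAV / nominal): {e_gav / e_nom:.2f}")
```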
Physical Reinforcement Learning
Digital computers are power-hungry and largely intolerant of damaged components, making them potentially difficult tools for energy-limited autonomous agents in uncertain environments. Recently developed Contrastive Local Learning Networks (CLLNs) -- analog networks of self-adjusting nonlinear resistors -- are inherently low-power and robust to physical damage, but were constructed to perform supervised learning. In this work we demonstrate success on two simple RL problems using Q-learning adapted for simulated CLLNs. Doing so makes explicit the components (beyond the network being trained) required to enact various tools in the RL toolbox, some of which (policy function and value function) are more natural in this system than others (replay buffer). We discuss assumptions, such as physical safety, that digital hardware requires, CLLNs can forgo, and biological systems cannot rely on, and we highlight secondary goals that are important in biology and trainable in CLLNs but make little sense in digital computers.
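For reference, the sketch below is the standard tabular form of the Q-learning rule being adapted; in the paper the value function is embodied by an analog resistor network rather than a digital table. The toy chain environment and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of tabular Q-learning on a toy chain: move right to
# reach a rewarded terminal state. In the paper the Q-function is
# realized by an analog CLLN rather than this digital table; the
# environment and hyperparameters here are illustrative assumptions.

n_states, n_actions = 8, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(3)

def greedy(qs):                       # argmax with random tie-breaking
    return int(rng.choice(np.flatnonzero(qs == qs.max())))

for episode in range(500):
    s = 0
    for _ in range(100):
        a = rng.integers(n_actions) if rng.random() < eps else greedy(Q[s])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if r > 0:
            break

# Greedy policy: every non-terminal state should prefer action 1 (right).
print(Q.argmax(axis=1))
```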
Analog Physical Systems Can Exhibit Double Descent
Dillavou, Sam, Rocks, Jason W, Wycoff, Jacob F, Liu, Andrea J, Durian, Douglas J
An important component of the success of large AI models is double descent, in which networks avoid overfitting as they grow relative to the amount of training data, instead improving their performance on unseen data. Here we demonstrate double descent in a decentralized analog network of self-adjusting resistive elements. This system trains itself and performs tasks without a digital processor, offering potential gains in energy efficiency and speed -- but must endure component non-idealities. We find that standard training fails to yield double descent, but a modified protocol that accommodates this inherent imperfection succeeds. Our findings show that analog physical systems, if appropriately trained, can exhibit behaviors underlying the success of digital AI. Further, they suggest that biological systems might similarly benefit from over-parameterization.
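The phenomenon itself is easy to reproduce in a digital surrogate, which makes the analog result concrete by contrast: minimum-norm least squares on fixed random features, swept in width, shows a test-error peak near the interpolation threshold (width roughly equal to the number of training samples) and a second descent beyond it. Dimensions, noise level, and the tanh features below are illustrative assumptions; the paper observes the analogous curve in self-trained resistor networks.

```python
import numpy as np

# Digital surrogate for double descent: minimum-norm least squares on
# fixed random features. Test error typically peaks near the
# interpolation threshold (width ~ number of training samples) and
# descends again as width grows. Dimensions and noise are assumptions.

rng = np.random.default_rng(4)
d, n_train, n_test = 20, 100, 1000
w_true = rng.normal(0, 1, d)
X_tr = rng.normal(0, 1, (n_train, d))
X_te = rng.normal(0, 1, (n_test, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)  # noisy train labels
y_te = X_te @ w_true                                   # noiseless targets

for width in [20, 50, 100, 200, 1000]:
    W = rng.normal(0, 1 / np.sqrt(d), (d, width))      # fixed random features
    H_tr, H_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)
    beta = np.linalg.pinv(H_tr) @ y_tr                 # minimum-norm solution
    mse = np.mean((H_te @ beta - y_te) ** 2)
    print(f"width {width:5d}  test MSE {mse:9.3f}")    # peak expected near 100
```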