Towards Understanding the Effect of Leak in Spiking Neural Networks

Sayeed Shafayet Chowdhury, Chankyu Lee, Kaushik Roy

arXiv.org Machine Learning 

Over the past few years, advances in deep artificial neural networks (ANNs) have led to remarkable success in various cognitive tasks (e.g., vision, language and behavior). In some cases, neural networks have outperformed conventional algorithms and achieved human-level performance [1, 2]. However, recent ANNs have become extremely compute-intensive and often do not generalize well to data unseen during training. On the other hand, the human brain can reliably learn and compute intricate cognitive tasks with a power budget of only a few watts. Recently, Spiking Neural Networks (SNNs) have been explored as a route toward robust and energy-efficient machine intelligence, guided by cues from neuroscience experiments [3]. SNNs are categorized as the new generation of neural networks [4] based on their neuronal functionalities. A variety of spiking neuron models closely resemble biological neuronal mechanisms, transmitting information through discrete spatiotemporal events (or spikes). These spiking neuron models are characterized by an internal state called the membrane potential. A spiking neuron integrates its inputs over time and fires an output spike whenever the membrane potential exceeds a threshold.
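The following is a minimal sketch, not taken from the paper, of the integrate-fire-reset dynamics described above. The function name, the leak factor beta, the threshold value, and the reset behavior are all assumptions for illustration; beta = 1.0 corresponds to a pure integrate-and-fire neuron, while beta < 1.0 gives the leaky variant that the paper studies.

```python
import numpy as np

def simulate_neuron(input_current, threshold=1.0, beta=0.9, v_reset=0.0):
    """Simulate one spiking neuron over discrete time steps (hypothetical sketch).

    input_current : 1-D array of pre-synaptic input at each time step
    threshold     : firing threshold on the membrane potential
    beta          : assumed leak (decay) factor applied to the potential each step
    v_reset       : value the potential is reset to after a spike
    """
    v = 0.0                             # membrane potential (internal state)
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = beta * v + i_t              # leaky integration of the input over time
        if v >= threshold:              # fire whenever the potential exceeds the threshold
            spikes[t] = 1.0
            v = v_reset                 # reset the potential after emitting a spike
    return spikes

# Example: a constant input of 0.3 per step accumulates until the threshold is crossed
print(simulate_neuron(np.full(20, 0.3)))
```

With beta below 1.0 the potential decays between inputs, so weak or widely spaced inputs may never reach the threshold; setting beta to 1.0 removes the leak and the neuron fires at a rate set purely by the accumulated input.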
