IM-Loss: Information Maximization Loss for Spiking Neural Networks

Neural Information Processing Systems



I(U; O) = H(O)    (10)
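The identity I(U; O) = H(O) holds because the spike output O is a deterministic function of the membrane potential U (a thresholding step), so the conditional entropy H(O|U) vanishes and maximizing mutual information reduces to maximizing the output entropy. A minimal sketch of this idea, assuming a single binary spike with firing probability p (the function name and the bits convention are illustrative, not from the paper):

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy H(O), in bits, of a binary spike O that fires with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a deterministic (always-0 or always-1) spike carries no information
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Since O = Heaviside(U - threshold) is deterministic given U, H(O|U) = 0,
# so I(U; O) = H(O): maximizing I(U; O) means maximizing the spike entropy,
# which for a single binary spike peaks at firing rate p = 0.5.
rates = [0.1, 0.3, 0.5, 0.9]
entropies = {p: binary_entropy(p) for p in rates}
```

The entropy is maximal (1 bit) at p = 0.5 and falls to 0 at either extreme, which is why pushing firing rates away from saturation preserves information through the spiking nonlinearity.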


Spiking Neural Network (SNN), recognized as a type of biologically plausible architecture, has recently drawn much research attention. The bio-mimetic mechanism of SNNs offers extreme energy efficiency, since it avoids multiplications on neuromorphic hardware. However, the 0/1 spike quantization in the forward pass causes information loss and accuracy degradation. To address this problem, this paper proposes the Information Maximization Loss (IM-Loss), which aims at maximizing the information flow in the SNN. The IM-Loss not only directly enhances the information expressiveness of an SNN but also plays part of the role of normalization, without introducing any additional operations (e.g., bias and scaling) in the inference phase.
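The abstract's core idea is a loss term that rewards high-entropy spike activity. A minimal sketch of such a term, assuming (hypothetically; this is not the paper's exact formulation) that the penalty is the negative binary entropy of a layer's mean firing rate, so that minimizing it drives the rate toward 0.5 where the spike entropy H(O) is maximal:

```python
import math

def im_loss(spikes, eps=1e-8):
    """Hypothetical information-maximization penalty: the negative entropy
    (in nats) of the empirical firing rate of a list of 0/1 spikes.
    Minimizing this penalty pushes the rate toward 0.5."""
    p = sum(spikes) / len(spikes)      # empirical firing rate of the layer
    p = min(max(p, eps), 1.0 - eps)    # clamp away from 0/1 for log stability
    return p * math.log(p) + (1.0 - p) * math.log(1.0 - p)  # = -H(O)

balanced = im_loss([0, 1, 0, 1])   # firing rate 0.5 -> lowest penalty
sparse   = im_loss([0, 0, 0, 1])   # firing rate 0.25 -> higher penalty
```

Added to the task loss, such a term acts as a soft pressure on activation statistics, which is consistent with the abstract's claim that IM-Loss plays part of the role of normalization while adding no inference-time operations.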


Yufei Guo∗, Yuanpei Chen∗, Liwen Zhang, Xiaode Liu, Yinglei Wang, Xuhui Huang, Zhe Ma

The conditional entropy H(O|U) can be expressed as the equation below, according to Eq. 5 and Eq. 7.

A.2 Algorithm
The proposed training algorithm of an SNN is presented in Algorithm 1.
Algorithm 1: The proposed training algorithm of an SNN.
Input: initialized SNN; training dataset; total number of training epochs, I; training iterations per epoch, J.
Output: the trained SNN.
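The loop structure stated in Algorithm 1's inputs (I epochs, each with J iterations, producing a trained SNN) can be sketched as follows; the `step` callback and the batch-selection scheme are hypothetical stand-ins, since the excerpt does not give the body of the algorithm:

```python
def train_snn(snn, dataset, num_epochs, iters_per_epoch, step):
    """Skeleton of the Algorithm 1 loop structure: for each of I epochs,
    run J training iterations. `step` is a hypothetical callback that
    performs one forward/backward update and returns the iteration loss."""
    history = []
    for epoch in range(num_epochs):            # i = 1, ..., I
        for it in range(iters_per_epoch):      # j = 1, ..., J
            # hypothetical batch selection; the paper's sampling scheme is not shown
            batch = dataset[(epoch * iters_per_epoch + it) % len(dataset)]
            history.append(step(snn, batch))
    return history  # the (mutated) snn is the algorithm's output

# toy usage: a no-op step, just to show I * J update calls are made
calls = train_snn(object(), [1, 2, 3], num_epochs=2, iters_per_epoch=3,
                  step=lambda snn, batch: 0.0)
```

In the paper's setting, `step` would compute the task loss plus the IM-Loss term and backpropagate through a surrogate gradient of the spike function.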