Collaborating Authors

 Pei, Jing


Understanding the Functional Roles of Modelling Components in Spiking Neural Networks

arXiv.org Artificial Intelligence

Spiking neural networks (SNNs), inspired by the neural circuits of the brain, hold promise for achieving high computational efficiency with biological fidelity. Nevertheless, SNNs remain difficult to optimize because the functional roles of their modelling components are unclear. By designing and evaluating several variants of the classic model, we systematically investigate the functional roles of the key modelling components (leakage, reset, and recurrence) in leaky integrate-and-fire (LIF) based SNNs. Through extensive experiments, we demonstrate how these components influence the accuracy, generalization, and robustness of SNNs. Specifically, we find that leakage plays a crucial role in balancing memory retention and robustness, the reset mechanism is essential for uninterrupted temporal processing and computational efficiency, and recurrence enriches the capability to model complex dynamics at the cost of degraded robustness. Based on these observations, we provide optimization suggestions for enhancing the performance of SNNs in different scenarios. This work deepens the understanding of how SNNs work and offers valuable guidance for the development of more effective and robust neuromorphic models.
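The three components under study map directly onto the discrete-time LIF update. The following is a minimal NumPy sketch, not the paper's code; parameter names such as leak and v_th are assumptions of this illustration. It shows where leakage, reset, and recurrence each enter the dynamics, so that "removing" a component corresponds to leak=1.0, reset=None, or w_rec=None.

import numpy as np

def lif_step(v, x, s_prev, w_rec=None, leak=0.9, v_th=1.0, reset="hard"):
    """One discrete-time update of an LIF layer: leakage, reset, recurrence."""
    i_rec = w_rec @ s_prev if w_rec is not None else 0.0
    v = leak * v + x + i_rec                 # leaky integration of inputs
    s = (v >= v_th).astype(v.dtype)          # fire where the threshold is crossed
    if reset == "hard":
        v = v * (1.0 - s)                    # hard reset: fired neurons go to 0
    elif reset == "soft":
        v = v - v_th * s                     # soft reset: subtract the threshold
    return v, s

# Tiny demo: 4 recurrently coupled neurons driven by random input for 10 steps.
rng = np.random.default_rng(0)
n = 4
v, s = np.zeros(n), np.zeros(n)
w_rec = 0.1 * rng.standard_normal((n, n))
for t in range(10):
    v, s = lif_step(v, rng.random(n), s, w_rec=w_rec)
    print(t, s.astype(int))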


Enhancing Graph Representation Learning with Attention-Driven Spiking Neural Networks

arXiv.org Artificial Intelligence

Graph representation learning has become a crucial task in machine learning and data mining due to its potential for modeling complex structures such as social networks, chemical compounds, and biological systems. Spiking neural networks (SNNs) have recently emerged as a promising alternative to traditional neural networks for graph learning tasks, benefiting from their ability to efficiently encode and process temporal and spatial information. In this paper, we propose a novel approach that integrates attention mechanisms with SNNs to improve graph representation learning. Specifically, we introduce an attention mechanism for SNNs that can selectively focus on important nodes and their corresponding features in a graph during the learning process. We evaluate the proposed method on several benchmark datasets and show that it achieves performance comparable to existing graph learning techniques.
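The abstract does not specify the architecture, but the idea of attending over nodes' spike-derived features can be sketched as below. Everything here (firing-rate features, the single projection vector w_att, the neighbourhood softmax) is an assumption of this illustration, not the authors' model.

import numpy as np

def spiking_graph_attention(spikes, adj, w_att):
    """Score nodes from their spike-rate features, then aggregate each node's
    neighbourhood weighted by a softmax over those scores, masked by edges."""
    rates = spikes.mean(axis=0)                  # (N, F) firing rates over time
    scores = rates @ w_att                       # (N,) per-node importance
    e = np.where(adj > 0, np.exp(scores)[None, :], 0.0)   # zero out non-edges
    alpha = e / np.clip(e.sum(axis=1, keepdims=True), 1e-9, None)
    return alpha @ rates                         # (N, F) attended node features

# Demo: 5 nodes, 3 features, 20 time steps, a random graph with self-loops.
rng = np.random.default_rng(1)
T, N, F = 20, 5, 3
spikes = (rng.random((T, N, F)) < 0.3).astype(float)
adj = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(adj, 1.0)
out = spiking_graph_attention(spikes, adj, rng.standard_normal(F))
print(out.shape)                                 # (5, 3)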


Brain-inspired global-local hybrid learning towards human-like intelligence

arXiv.org Artificial Intelligence

Two main routes of learning methods exist at present: neuroscience-inspired methods and machine learning methods. Both have their own advantages, but neither can currently solve all learning problems well. Integrating them into one network may provide better learning abilities for general tasks. On the other hand, the spiking neural network embodies "computation" in the spatiotemporal domain, with the unique features of a rich coding scheme and threshold switching, which makes it well suited to low-power, highly parallel neuromorphic computing. Here, we report a spike-based general learning model that integrates the two learning routes by introducing a brain-inspired meta-local module and two-phase parametric modelling. The hybrid model can meta-learn general local plasticity and receive top-down supervision for multi-scale learning. We demonstrate that this hybrid model facilitates the learning of many general tasks, including fault-tolerance learning, few-shot learning, and multi-task learning. Furthermore, the implementation of the hybrid model on the Tianjic neuromorphic platform shows that it can fully exploit the advantages of the neuromorphic hardware architecture and promote energy-efficient on-chip applications.
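As a rough sketch of the two routes being combined, consider a single weight update that sums a meta-parameterized local plasticity term with a top-down gradient term. The rule's form and coefficient names ("a", "b") are assumptions for illustration; in the paper, the local rule's parameters are meta-learned rather than hand-set.

import numpy as np

def hybrid_update(w, pre, post, grad_w, meta, lr=1e-2):
    """One hybrid step: global gradient descent plus a parametric local rule."""
    # Local route: Hebbian-style correlation term plus a weight-decay term,
    # with coefficients the meta-local module would learn (assumed names).
    local = meta["a"] * np.outer(post, pre) + meta["b"] * w
    # Global route: top-down supervision via the loss gradient.
    return w - lr * grad_w + lr * local

# Demo with assumed shapes: 3 output units, 4 input units.
rng = np.random.default_rng(2)
w = rng.standard_normal((3, 4))
w = hybrid_update(w, pre=rng.random(4), post=rng.random(3),
                  grad_w=rng.standard_normal((3, 4)),
                  meta={"a": 0.5, "b": -0.01})
print(w.shape)  # (3, 4)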


Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework

arXiv.org Machine Learning

There is a pressing need to build an architecture that could subsume these networks (i.e., binary and ternary networks) under a unified framework that achieves both higher performance and lower overhead. To this end, two fundamental issues must be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the memory/computation bottlenecks. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable the implementation of the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology that constrains the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes binary and ternary networks as its special cases. More particularly, we find that when both the weights and activations become ternary, the DNNs can be reduced to gated XNOR networks (or sparse binary networks), since only the event of a non-zero weight meeting a non-zero activation enables the control gate to start the XNOR logic operations of the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to suit various hardware platforms.
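The "gated XNOR" view follows from ternary arithmetic: a multiply contributes only when both the weight and the activation are non-zero (the gate), and for the surviving +1/-1 pairs the product is an XNOR-style sign match. Below is a minimal sketch with an assumed fixed threshold delta for ternarization; the paper's DST method, which constrains training-time weight transitions, is not reproduced here.

import numpy as np

def ternarize(x, delta=0.5):
    """Quantize values to {-1, 0, +1}; magnitudes below delta gate off to 0."""
    q = np.zeros_like(x)
    q[x > delta] = 1.0
    q[x < -delta] = -1.0
    return q

def gated_xnor_dot(w_t, a_t):
    """Ternary dot product, event-driven style: only non-zero weight/activation
    pairs pass the control gate; each surviving pair adds +1 (signs match,
    XNOR true) or -1 (signs differ)."""
    gate = (w_t != 0) & (a_t != 0)
    return np.sum(w_t[gate] * a_t[gate])

# Demo: the gated result equals the full ternary dot product.
rng = np.random.default_rng(3)
w_t = ternarize(rng.standard_normal(8))
a_t = ternarize(rng.standard_normal(8))
print(gated_xnor_dot(w_t, a_t), float(w_t @ a_t))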