GT-SNT: A Linear-Time Transformer for Large-Scale Graphs via Spiking Node Tokenization
Huizhe Zhang, Jintang Li, Yuchang Zhu, Huazhen Zhong, Liang Chen
Graph Transformers (GTs), which integrate message passing and self-attention mechanisms, have achieved promising empirical results in graph prediction tasks. However, the design of scalable and topology-aware node tokenization has lagged behind other modalities. This gap becomes critical as the quadratic complexity of full attention renders GTs impractical on large-scale graphs. Recently, Spiking Neural Networks (SNNs), as brain-inspired models, have provided an energy-saving scheme that converts input intensity into discrete spike-based representations through event-driven spiking neurons. Inspired by these characteristics, we propose a linear-time Graph Transformer with Spiking Node Tokenization (GT-SNT) for node classification. By integrating multi-step feature propagation with SNNs, spiking node tokenization generates compact, locality-aware spike-count embeddings as node tokens, avoiding predefined codebooks and their utilization issues. The codebook-guided self-attention leverages these tokens to perform node-to-token attention for linear-time global context aggregation. In experiments, we compare GT-SNT with state-of-the-art baselines on node classification datasets ranging from small to large. Results show that GT-SNT achieves comparable performance on most datasets and up to 130x faster inference than other GTs.
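To make the two mechanisms in the abstract concrete, the sketch below illustrates (1) spiking node tokenization, here modeled as a leaky integrate-and-fire (LIF) neuron applied over multi-step feature propagation, and (2) node-to-token attention against M summary tokens with M << N, which is what yields linear rather than quadratic attention cost. This is a minimal reading of the abstract, not the authors' code: the class names, the LIF hyperparameters, and the uniform chunk pooling that stands in for the paper's codebook guidance are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpikingNodeTokenizer(nn.Module):
    """Sketch: run K steps of feature propagation A_hat @ X, feed each step
    into an LIF neuron, and use the per-node spike counts over the K steps
    as compact, locality-aware node tokens. Threshold and decay values are
    assumptions; training would also need a surrogate gradient, omitted here."""

    def __init__(self, threshold: float = 1.0, decay: float = 0.5):
        super().__init__()
        self.threshold = threshold  # firing threshold (assumed value)
        self.decay = decay          # membrane leak factor (assumed value)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor, k_steps: int = 4):
        membrane = torch.zeros_like(x)
        counts = torch.zeros_like(x)
        h = x
        for _ in range(k_steps):
            h = adj_norm @ h                      # one propagation step
            membrane = self.decay * membrane + h  # leaky integration
            spikes = (membrane >= self.threshold).float()
            counts = counts + spikes              # accumulate spike counts
            membrane = membrane * (1.0 - spikes)  # hard reset after firing
        return counts  # [N, d] integer-valued spike-count embeddings


class NodeToTokenAttention(nn.Module):
    """Sketch: instead of N x N self-attention, each node attends to M pooled
    token embeddings (M << N), so the cost is O(N * M) rather than O(N^2)."""

    def __init__(self, dim: int, num_tokens: int = 32):
        super().__init__()
        self.num_tokens = num_tokens
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)

    def forward(self, x: torch.Tensor, spike_tokens: torch.Tensor):
        # Pool the N spike-count tokens into M summary tokens; uniform chunk
        # pooling here is a stand-in for the paper's codebook guidance.
        n = spike_tokens.size(0)
        idx = torch.arange(n) * self.num_tokens // n
        pooled = torch.zeros(self.num_tokens, x.size(1)).index_add_(0, idx, spike_tokens)
        q = self.q(x)                               # [N, d]
        k, v = self.kv(pooled).chunk(2, dim=-1)     # [M, d] each
        attn = F.softmax(q @ k.t() / x.size(1) ** 0.5, dim=-1)  # [N, M]
        return attn @ v                             # [N, d] global context


# Toy usage: 6 nodes, 8-dim features, identity as a trivially normalized adjacency.
N, d = 6, 8
x = torch.randn(N, d)
adj = torch.eye(N)
tokens = SpikingNodeTokenizer()(x, adj, k_steps=4)
out = NodeToTokenAttention(dim=d, num_tokens=4)(x, tokens)
```

Because attention is computed against a fixed number of summary tokens, doubling the node count only doubles the attention cost, which is consistent with the linear-time claim in the abstract.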
arXiv.org Artificial Intelligence
Dec-12-2025