Scalable Graph Transformers for Million Nodes

#artificialintelligence 

Recently, building Transformer models for graph-structured data has attracted wide interest in the machine learning research community. One critical challenge stems from the quadratic complexity of global attention, which hinders Transformers from scaling to large graphs. This work proposes a scalable graph Transformer for node classification on graphs whose node counts range from thousands to millions (or even more). The key module is a kernelized Gumbel-Softmax-based message passing scheme that achieves all-pair feature propagation within O(N) complexity (N denoting the number of nodes). The following content summarizes the main idea and results of this work.
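To illustrate how all-pair propagation can avoid the quadratic cost, here is a minimal NumPy sketch of the kernelized-attention ingredient: softmax attention is approximated with a positive random-feature map, so the N x N attention matrix is never materialized and the cost becomes O(N·m·d) for m random features. This is an illustrative sketch, not the paper's exact implementation; the function names, the feature dimension, and the omission of the Gumbel-Softmax edge-sampling component are all simplifications made here.

```python
import numpy as np

def random_feature_map(x, W):
    # Positive random features approximating exp(q . k):
    # phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m)
    m = W.shape[0]
    proj = x @ W.T                                       # (N, m)
    norm = 0.5 * np.sum(x ** 2, axis=-1, keepdims=True)  # (N, 1)
    return np.exp(proj - norm) / np.sqrt(m)

def kernelized_attention(Q, K, V, num_features=64, seed=0):
    """All-pair attention in O(N) memory/time via kernel factorization."""
    rng = np.random.default_rng(seed)
    d = Q.shape[-1]
    W = rng.standard_normal((num_features, d))
    phi_q = random_feature_map(Q / d ** 0.25, W)  # (N, m)
    phi_k = random_feature_map(K / d ** 0.25, W)  # (N, m)
    # Reassociate the matrix product: phi_q @ (phi_k^T V) costs
    # O(N m d) instead of the O(N^2 d) of explicit softmax attention.
    kv = phi_k.T @ V               # (m, d)
    z = phi_k.sum(axis=0)          # (m,) normalizer terms
    return (phi_q @ kv) / (phi_q @ z)[:, None]
```

Because the kernel map factorizes the softmax, the per-node output is still a (normalized) weighted combination of all N value vectors, which is what makes dense message passing feasible at the million-node scale.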
