Demystifying Oversmoothing in Attention-Based Graph Neural Networks

Neural Information Processing Systems 

The latter can be viewed as attention-based GNNs on complete graphs. In this paper, we provide a definitive answer to this question: attention-based GNNs also lose expressive power exponentially, albeit potentially at a slower exponential rate compared to GCNs.
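The oversmoothing phenomenon the abstract refers to can be illustrated numerically: repeatedly aggregating node features with a row-stochastic (attention-style) matrix drives all features toward a common value, so their distance from the mean shrinks roughly geometrically. Below is a minimal numpy sketch of this effect; the toy dot-product attention rule, the ring graph, and all names (`attention_matrix`, `diversity`) are illustrative assumptions, not the paper's construction, and the update omits weight matrices and nonlinearities for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.normal(size=(n, d))  # initial node features

# Ring graph with self-loops (guaranteed connected).
# A complete graph, as in the abstract's transformer analogy, would be A = np.ones((n, n)).
A = np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = 1.0
    A[i, (i - 1) % n] = 1.0

def attention_matrix(X, A):
    # Toy dot-product attention restricted to graph edges,
    # row-normalized with softmax -> a row-stochastic averaging matrix.
    scores = X @ X.T
    scores = np.where(A > 0, scores, -np.inf)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=1, keepdims=True)

def diversity(X):
    # Distance of the features from their common mean; 0 = fully oversmoothed.
    return float(np.linalg.norm(X - X.mean(axis=0, keepdims=True)))

div = []
for _ in range(60):
    div.append(diversity(X))
    X = attention_matrix(X, A) @ X  # one round of attention aggregation
```

After 60 rounds, `div[-1]` is a small fraction of `div[0]`: the features have nearly collapsed onto their mean, which is the loss of expressive power the abstract describes (the paper's contribution is proving this collapse is exponential even when the averaging weights are input-dependent, as in attention).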
