Harnessing Collective Structure Knowledge in Data Augmentation for Graph Neural Networks

Rongrong Ma, Guansong Pang, Ling Chen

arXiv.org Artificial Intelligence 

In the past few years, Graph Neural Networks (GNNs) [14, 43] have emerged as one of the most powerful and successful techniques for graph representation learning. Message passing neural networks constitute a prevalent category of GNN models; they learn node features and graph structure information by recursively aggregating the current representations of each node and its neighbors. Diverse aggregation strategies have been introduced, giving rise to various GNN backbones such as GCN and GIN, among others [14, 15, 16, 17, 18]. However, the expressive power of these message passing GNNs is upper bounded by the 1-dimensional Weisfeiler-Leman (1-WL) test [18, 19], which encodes a node's color by recursively expanding the node's neighbors into a rooted subtree. As shown in Figure 1, such rooted subtrees have limited expressiveness and can be identical for graphs with different structures, so these graphs cannot be distinguished. This presents a bottleneck for applying WL tests or message passing neural networks to many real-world graph application domains. The failure of the WL test is mainly due to the rooted subtree's limited capability to capture the different substructures that can appear in a graph. Since the message passing scheme of GNNs mimics the 1-WL algorithm, one intuition for enhancing the expressive power of GNNs is to enrich the passed information.

Figure 1: 1- and 2-WL tests fail to distinguish the two graphs as they obtain the same rooted subtree (node coloring).
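To make this limitation concrete, below is a minimal sketch of 1-WL color refinement; the function name and the example graph pair are illustrative choices, not taken from the paper or Figure 1. It shows that a 6-cycle and two disjoint triangles, which are non-isomorphic, end up with identical color histograms, so 1-WL (and hence any standard message passing GNN) cannot tell them apart.

```python
from collections import Counter

def wl_colors(adj, iters=3):
    """1-WL color refinement (sketch). adj: dict node -> list of neighbors."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(iters):
        # New signature = (own color, sorted multiset of neighbor colors),
        # mirroring the rooted-subtree expansion described above.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel distinct signatures with compact integer colors.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())  # graph-level color histogram

# Two non-isomorphic 2-regular graphs: one 6-cycle vs. two disjoint triangles.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(two_triangles))  # True: 1-WL cannot distinguish them
```

Because every node in both graphs has degree 2 and the same initial color, each refinement round assigns all nodes the same new color, so the histograms never diverge; richer passed information (e.g., substructure cues) is needed to separate such graphs.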
