Distance and Hop-wise Structures Encoding Enhanced Graph Attention Networks

Huang, Zhiguo, Chen, Xiaowei, Wang, Bojuan

arXiv.org Artificial Intelligence 

Many works have shown that existing neighbor-averaging Graph Neural Networks cannot efficiently capture structural information; in some cases such GNNs cannot even capture node degree. The reason is intuitive: a neighbor-averaging GNN can only combine the feature vectors of each node's neighbors, so if those feature vectors contain no structural information, a hop-wise neighbor-averaging GNN can capture at best degree information ([1]; [2]; [3]). An intuitive remedy is therefore to inject structural information into the feature vectors, which may improve the performance of GNNs. Numerous works have shown that injecting structure, distance, position, or spatial information can significantly improve the performance of neighbor-averaging GNNs ([4]; [5]; [6]; [7]; [8]; [9]; [10]). However, existing approaches have their limitations. Some have very high computational complexity and cannot be applied to large-scale graphs (MotifNet [4]). Some simply concatenate structural information with the intrinsic feature vector (ID-GNN [6]; P-GNN [8]; DE-GNN [9]), which may confuse signals of different kinds of features; for example, in the ogbn-arxiv dataset the intrinsic feature is a semantic embedding of the title or abstract, which carries an entirely different signal from structural information. Others are oriented toward graph-level tasks and only handle small graphs (Graphormer [7]; SubGNN [10]).
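To make the intuition concrete, the following minimal NumPy sketch (not from the paper; the toy adjacency matrix and constant features are illustrative assumptions) shows that when node features carry no structural signal, one round of sum aggregation recovers only the node degree, while mean aggregation recovers no structure at all.

```python
import numpy as np

# Toy graph with 4 nodes and edges (0-1), (0-2), (0-3), (1-2) -- an assumed example.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

# Structure-free intrinsic features: identical for every node.
X = np.ones((4, 1))

# One round of neighbor aggregation (identity weights, no nonlinearity).
sum_agg = A @ X                                 # sum over neighbors
mean_agg = (A / A.sum(axis=1, keepdims=True)) @ X  # average over neighbors

print(sum_agg.ravel())   # [3. 2. 2. 1.] -> exactly the degree vector
print(mean_agg.ravel())  # [1. 1. 1. 1.] -> no structural information survives
```

Under these assumptions, degree is the only structural quantity the sum aggregator can expose, and the mean aggregator exposes none, which matches the claim that such GNNs capture degree information at best when the input features are structure-free.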