$p$-Laplacian Based Graph Neural Networks
Guoji Fu, Peilin Zhao, Yatao Bian
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, owing to their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of nodes and their neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may work significantly worse than simply applying multi-layer perceptrons (MLPs) to each node. To address this problem, we propose a $p$-Laplacian based GNN, whose message passing mechanism is derived from a discrete regularization framework and can be theoretically interpreted as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. The proposed $p$-Laplacian based GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks; they can also adaptively learn aggregation weights and are robust to noisy edges.

In this paper, we explore the usage of graph neural networks (GNNs) for semi-supervised node classification on graphs, especially when the graphs admit strong heterophily or noisy edges. Semi-supervised learning problems on graphs are ubiquitous in many real-world scenarios, such as user classification in social media (Kipf & Welling, 2017), protein classification in biology (Velickovic et al., 2018), molecular property prediction in chemistry (Duvenaud et al., 2015), and many others (Marcheggiani & Titov, 2017; Satorras & Estrach, 2018). Recently, GNNs have become the de facto choice for processing graph-structured data.
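To make the discrete regularization view concrete, the following is a minimal sketch (not the paper's exact formulation; the weight matrix `W`, feature matrix `F`, learning rate, and the unnormalized energy are all illustrative assumptions) of a graph $p$-Dirichlet energy and one gradient-descent step on it, which is the kind of update a $p$-Laplacian based message passing scheme approximates:

```python
import numpy as np

def p_dirichlet_energy(W, F, p=2.0):
    """Graph p-Dirichlet energy S_p(F) = 1/2 * sum_ij W_ij * ||f_i - f_j||^p.
    Hypothetical sketch of the discrete regularization term; the paper's
    exact normalization may differ."""
    energy = 0.0
    n = W.shape[0]
    for i in range(n):
        for j in range(n):
            if W[i, j] > 0:
                energy += 0.5 * W[i, j] * np.linalg.norm(F[i] - F[j]) ** p
    return energy

def p_laplacian_step(W, F, p=2.0, lr=0.1):
    """One gradient step on S_p: a crude stand-in for one round of
    p-Laplacian message passing. The gradient w.r.t. f_i is
    p * sum_j W_ij * ||f_i - f_j||^(p-2) * (f_i - f_j)."""
    grad = np.zeros_like(F)
    n = W.shape[0]
    for i in range(n):
        for j in range(n):
            if W[i, j] > 0:
                diff = F[i] - F[j]
                norm = np.linalg.norm(diff)
                if norm > 1e-12:  # avoid 0^(p-2) blow-up for p < 2
                    grad[i] += W[i, j] * norm ** (p - 2) * diff
    return F - lr * p * grad

# Tiny 3-node path graph; node 2's feature disagrees with its neighbor.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
F = np.array([[0.], [0.], [1.]])
e0 = p_dirichlet_energy(W, F, p=1.5)
F1 = p_laplacian_step(W, F, p=1.5, lr=0.1)
e1 = p_dirichlet_energy(W, F1, p=1.5)
print(e1 < e0)  # True: each step lowers the p-Dirichlet energy
```

For $p = 2$ this reduces to classical Laplacian smoothing; smaller $p$ penalizes large feature differences across edges less severely, which is one intuition for why $p$-Laplacian based aggregation can tolerate heterophilic or noisy edges.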
Nov-14-2021