Efficient Learning of Linear Graph Neural Networks via Node Subsampling
Neural Information Processing Systems
Graph Neural Networks (GNNs) are a powerful class of machine learning models with applications in recommender systems, drug discovery, social network analysis, and computer vision. One challenge in their implementation is that GNNs often take large-scale graphs as inputs, which imposes significant computational and storage costs in the training and testing phases. In particular, the message passing operations of a GNN require multiplying the graph adjacency matrix A \in \mathbb{R}^{n \times n} by the data matrix X \in \mathbb{R}^{n \times d}, and the O(n^2 d) time complexity can be prohibitive for large n. Thus, a natural question is whether it is possible to perform the GNN operations in (quasi-)linear time by avoiding the full computation of AX. To study this question, we consider the setting of a regression task on a two-layer linear Graph Convolutional Network (GCN). We develop an efficient training algorithm based on (1) performing node subsampling, (2) estimating the leverage scores of AX based on the subsampled graph, and (3) performing leverage score sampling on AX.
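The leverage score sampling in steps (2)-(3) can be sketched as follows. This is a minimal illustrative example, not the paper's algorithm: here the scores are computed exactly from the full product AX via a thin QR decomposition (the paper's point is to *estimate* them from a subsampled graph, avoiding that full product), and the function names and parameters are assumptions.

```python
import numpy as np

def leverage_scores(M):
    # Leverage score of row i of M is the squared norm of row i of Q,
    # where M = QR is a thin QR decomposition.
    Q, _ = np.linalg.qr(M)
    return np.sum(Q**2, axis=1)

def leverage_sample_regression(A, X, y, m, rng=None):
    # Illustrative only: computes M = AX in full (O(n^2 d) time), whereas
    # the paper estimates the leverage scores from a subsampled graph.
    rng = np.random.default_rng(rng)
    M = A @ X
    ell = leverage_scores(M)
    p = ell / ell.sum()                   # sampling distribution over rows
    idx = rng.choice(len(y), size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])         # reweight rows for unbiasedness
    beta, *_ = np.linalg.lstsq(w[:, None] * M[idx], w * y[idx], rcond=None)
    return beta
```

With m proportional to d (up to logarithmic factors), the reweighted least-squares solve on the sampled rows approximates the full regression on AX while fitting only m rows.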