Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters
Theoretical analyses of graph learning methods often assume complete observation of the input graph. In practice, this assumption is untenable for graphs of arbitrary size because of scalability constraints. In this work, we develop a theoretical framework for graph classification in the partial-observation setting (i.e., under subgraph sampling). Drawing on insights from graph limit theory, we propose a new graph classification model that operates on a randomly sampled subgraph, together with a novel topology that characterizes the model's representability. Our framework provides a theoretical validation of mini-batch learning on graphs and yields new learning-theoretic results on generalization bounds and size generalizability, without assumptions on the input.
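The paper's model itself is not reproduced here, but the sampling primitive the title refers to, a ball of fixed radius around a uniformly random vertex, can be sketched in plain Python. The adjacency-dict representation, function name, and toy graph below are illustrative assumptions, not taken from the paper:

```python
import random
from collections import deque

def sample_random_ball(adj, radius, rng=random):
    """Sample an r-ball from an undirected graph given as an
    adjacency dict: pick a uniformly random root vertex, then BFS
    out to `radius` hops. Returns the root and a dict mapping each
    vertex in the ball to its hop distance from the root."""
    root = rng.choice(sorted(adj))
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue  # do not expand beyond the ball's radius
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return root, dist

# Toy example: a path graph 0-1-2-3-4, sampled with radius 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
root, ball = sample_random_ball(adj, radius=1, rng=random.Random(7))
```

A mini-batch in this setting would simply be a collection of such independently sampled balls, each fed to the classifier in place of the full graph.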
Nov-5-2021