Flood and Echo: Algorithmic Alignment of GNNs with Distributed Computing
Joël Mathys, Florian Grötschla, Kalyan Varma Nadimpalli, Roger Wattenhofer
–arXiv.org Artificial Intelligence
Graph Neural Networks are a natural fit for learning algorithms. They can directly represent tasks through an abstract but versatile graph structure and handle inputs of different sizes. This opens up the possibility of scaling and extrapolation to larger graphs, one of the most important advantages of an algorithm. However, this raises two core questions: (i) how can we enable nodes to gather the required information in a given graph (information exchange), even if it is far away, and (ii) how can we design an execution framework which enables this information exchange for extrapolation to larger graph sizes (algorithmic alignment for extrapolation)? The proposed Flood and Echo execution framework addresses both: through its sparse but parallel activations, it is provably more efficient in terms of message complexity. We study the proposed model and provide both empirical evidence and theoretical insights regarding its expressiveness, efficiency, information exchange, and ability to extrapolate.

We study the problem of algorithm learning using Graph Neural Networks. An algorithm is best understood as a sequence of instructions which can be applied to compute a desired output given the respective input. Algorithms have the advantage that they work correctly for their entire domain. If we want to multiply two numbers, we can easily illustrate and explain the multiplication algorithm using small numbers. However, the same procedure generalizes: the algorithm can be used to extrapolate and multiply much larger numbers using the same algorithmic steps. Algorithm learning aims to grasp these underlying algorithmic principles and incorporate them into machine learning architectures. Therefore, the ability to process different input sizes and to extrapolate is at the core of our study. Graphs, and by extension GNNs, naturally lend themselves to the study of algorithms, as many algorithmic problems can be represented as graphs.
Moreover, they can inherently capture instances of different sizes, which allows us to study extrapolation. GNNs follow the message passing paradigm, which closely corresponds to computation models studied in distributed computing.
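To make the flood-and-echo execution pattern concrete, here is a minimal sketch in plain Python, assuming an adjacency-list graph. The function name, the parent selection, and the message-counting scheme are illustrative assumptions, not the paper's implementation: activations spread outward from an origin node in BFS waves (flood), then aggregate back toward the origin wave by wave (echo), so at each step only the current wavefront is active.

```python
def flood_and_echo(adj, origin):
    """Illustrative flood-and-echo pass on an undirected graph.

    adj: dict mapping each node to a list of neighbour nodes.
    Returns (aggregated value at origin, number of messages sent).
    Here the echo simply counts nodes, so the origin ends up with
    the size of its connected component.
    """
    # --- Flood phase: activations spread in BFS waves from the origin ---
    dist = {origin: 0}
    frontier, messages = [origin], 0
    while frontier:
        nxt = []
        for u in frontier:              # only wavefront nodes are active
            for v in adj[u]:
                messages += 1           # one message per edge crossing
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    # --- Echo phase: aggregate values back toward the origin ---
    value = {u: 1 for u in dist}        # each node contributes itself
    for u in sorted(dist, key=dist.get, reverse=True):
        if u != origin:
            # echo to any neighbour one wave closer to the origin
            parent = next(v for v in adj[u] if dist[v] == dist[u] - 1)
            value[parent] += value[u]
            messages += 1
    return value[origin], messages
```

Each of the two phases crosses every reachable edge at most a constant number of times, which is the sense in which sparse, wave-based activation keeps the message complexity low compared with activating all nodes in every round.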
Oct-12-2023