Sparse Implementation of Versatile Graph-Informed Layers

Della Santa, Francesco

arXiv.org Artificial Intelligence 

Graph Neural Networks (GNNs) are well known as powerful tools for learning tasks on graph-structured data [10], such as semi-supervised node classification, link prediction, and graph classification, with origins dating back to the late 2000s [4, 6, 7]. Recently, a new type of GNN layer, the Graph-Informed (GI) layer [1], has been developed, specifically designed for regression tasks on graph nodes; this type of task is not well suited to classic GNNs and is therefore typically approached with MLPs, which do not exploit the graph structure of the data. Nonetheless, the usage of GI layers has recently been extended to supervised classification tasks as well (see [3]). The main advantage of GI layers is the possibility of building Neural Networks (NNs), called Graph-Informed NNs (GINNs), suitable for large graphs and deep architectures. Their good performance, especially compared with classic MLPs, is illustrated in both [1] (regression tasks) and [3] (classification tasks for discontinuity detection). However, at the time of writing, existing GI layer implementations have one main limitation.
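To make the idea concrete, the following is a minimal sketch of a single-filter GI layer in its simplest form, acting on scalar node features and using a sparse adjacency matrix. It assumes the basic formulation of [1], where each node aggregates the weighted features of its neighbors (and itself, via A + I) before a nonlinearity; the function name `gi_layer` and the toy graph are hypothetical, and the full layer in [1] supports multiple input/output features per node.

```python
import numpy as np
import scipy.sparse as sp

def gi_layer(x, adj, w, b, activation=np.tanh):
    """Simplified single-filter Graph-Informed layer (sketch, not the reference code).

    x   : (n,) scalar feature per node
    adj : (n, n) sparse adjacency matrix of the graph
    w   : (n,) one trainable weight per node
    b   : (n,) bias per node
    """
    # A_hat = A + I, so each node also aggregates its own feature.
    a_hat = adj + sp.identity(adj.shape[0], format="csr")
    # out_i = f( sum_j a_hat[j, i] * w_j * x_j + b_i ):
    # a sparse matrix-vector product, which is what makes
    # a sparse implementation attractive for large graphs.
    out = a_hat.T @ (w * x) + b
    return activation(out)

# Toy example: a 3-node path graph 0 - 1 - 2.
adj = sp.csr_matrix(np.array([[0.0, 1.0, 0.0],
                              [1.0, 0.0, 1.0],
                              [0.0, 1.0, 0.0]]))
x = np.array([1.0, 2.0, 3.0])
w = np.ones(3)   # identity weights for illustration
b = np.zeros(3)  # no bias for illustration
y = gi_layer(x, adj, w, b)
```

With unit weights and zero bias, each output is simply the activation of the sum of the node's own feature and its neighbors' features, which highlights how the layer is "informed" by the graph structure.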
