Learning Parametrised Graph Shift Operators
Dasoulas, George, Lutzeyer, Johannes, Vazirgiannis, Michalis
In many domains data is naturally represented as graphs, and graph representations of data are therefore becoming increasingly important in machine learning. Network data is, implicitly or explicitly, always represented using a graph shift operator (GSO), with the most common choices being the adjacency and Laplacian matrices and their normalisations. In this paper, a novel parametrised GSO (PGSO) is proposed, in which specific parameter values recover the most commonly used GSOs and message-passing operators in graph neural network (GNN) frameworks. The PGSO is suggested as a replacement for the standard GSOs used in state-of-the-art GNN architectures, and the optimisation of the PGSO parameters is seamlessly included in model training. It is proved that the PGSO has real eigenvalues and a set of real eigenvectors independent of the parameter values, and spectral bounds on the PGSO are derived. PGSO parameters are shown to adapt to the sparsity of the graph structure in a study on stochastic blockmodel networks, where they are found to automatically replicate the GSO regularisation found in the literature. On several real-world datasets the accuracy of state-of-the-art GNN architectures is improved by the inclusion of the PGSO in both node- and graph-classification tasks.

Graph representation learning has attracted significant research interest in recent years, mainly due to the structural complexity of real-world data and applications (Hamilton et al., 2017b; Wu et al., 2020). The topology of the observations plays a central role when performing machine learning tasks on graph-structured data.
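The abstract does not spell out the operator's functional form, so the following is only a minimal sketch of what a parametrised GSO could look like: a small set of scalar parameters that, at particular values, reproduce the adjacency matrix, the Laplacian, and the symmetrically normalised adjacency. The function name `pgso`, its parameter names, and the assumed form `S = m1*D_a^e1 + m2*D_a^e2 @ A_a @ D_a^e3 + m3*I` (with `A_a = A + a*I`) are hypothetical choices for illustration, not the paper's definition.

```python
import numpy as np

def pgso(A, m1=0.0, m2=1.0, m3=0.0, e1=0.0, e2=0.0, e3=0.0, a=0.0):
    """Hypothetical parametrised graph shift operator (sketch).

    Assumed form:  S = m1 * D_a^e1 + m2 * D_a^e2 @ A_a @ D_a^e3 + m3 * I,
    where A_a = A + a*I and D_a is the degree matrix of A_a.
    """
    n = A.shape[0]
    I = np.eye(n)
    A_a = A + a * I
    deg = A_a.sum(axis=1)          # degrees of the (possibly self-looped) graph
    D = lambda e: np.diag(deg ** e)  # D_a^e; degrees assumed positive
    return m1 * D(e1) + m2 * D(e2) @ A_a @ D(e3) + m3 * I

# Example on a 3-node path graph:
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

adjacency = pgso(A)                           # defaults give plain A
laplacian = pgso(A, m1=1.0, m2=-1.0, e1=1.0)  # D - A
sym_norm  = pgso(A, e2=-0.5, e3=-0.5)         # D^{-1/2} A D^{-1/2}
```

In a GNN, such scalar parameters could be registered as trainable weights, so that gradient descent selects the operator jointly with the network weights, which is the behaviour the abstract describes.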
Jan-25-2021