Coloring graph neural networks for node disambiguation

George Dasoulas, Ludovic Dos Santos, Kevin Scaman, Aladin Virmaux

arXiv.org Machine Learning 

Learning good representations is seen by many machine learning researchers as the main reason behind the tremendous successes of the field in recent years (Bengio et al., 2013). In image analysis (Krizhevsky et al., 2012), natural language processing (Vaswani et al., 2017), and reinforcement learning (Mnih et al., 2015), groundbreaking results rely on efficient and flexible deep learning architectures. Despite a large literature and state-of-the-art performance on benchmark graph classification datasets, graph neural networks still lack a similar theoretical foundation (Xu et al., 2019). Spectral approaches (Defferrard et al., 2016; Kipf and Welling, 2017) perform convolution in the Fourier domain of the graph. Recently, Xu et al. (2019) showed that message passing neural networks (MPNNs) are at most as expressive as the Weisfeiler-Lehman (WL) test for graph isomorphism (Weisfeiler and Lehman, 1968). Other recent approaches (Maron et al., 2019c) require tensors of order quadratic in the size of the graph. This work presents the theoretical tools used to design a universal graph representation; the underlying assumption is rather weak. Figure 2 (caption): Universal representations can easily be created by combining a separable representation with an MLP.
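The node-disambiguation idea behind the title can be sketched concretely: nodes with identical attributes are indistinguishable to an MPNN, so one can append a "color" (e.g. a one-hot vector) that differs across such nodes before message passing. The snippet below is an illustrative sketch of this coloring step, not the paper's actual implementation; the function name and the random-assignment scheme are our own assumptions.

```python
import numpy as np

def color_nodes(features, k, seed=0):
    """Append a one-hot 'color' of dimension k to each node's attributes.

    Nodes sharing identical attribute vectors receive distinct colors
    (up to k of them), so otherwise indistinguishable nodes become
    separable for a downstream message passing network.
    Illustrative sketch only; not the authors' API.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    colors = np.zeros((n, k))
    # Group nodes by identical attribute vectors.
    _, groups = np.unique(features, axis=0, return_inverse=True)
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        # Randomly assign distinct colors within each group
        # (wrapping around if the group is larger than k).
        assignment = rng.permutation(len(idx)) % k
        colors[idx, assignment] = 1.0
    return np.concatenate([features, colors], axis=1)

# Three nodes with identical attributes become three distinct rows.
X = np.zeros((3, 2))
Y = color_nodes(X, k=3)
```

Averaging a model's output over several random colorings (different seeds) is one way to keep the resulting representation approximately permutation-invariant.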
