Unsupervised Inductive Whole-Graph Embedding by Preserving Graph Proximity Machine Learning

In recent years we have witnessed the great popularity of graph representation learning, with success not only in node-level tasks such as node classification (Kipf & Welling, 2016a) and link prediction (Zhang & Chen, 2018), but also in graph-level tasks such as graph classification (Ying et al., 2018) and graph similarity/distance computation (Bai et al., 2019). There is a rich body of work (Belkin & Niyogi, 2003; Qiu et al., 2018) on node-level embeddings that turn each node in a graph into a vector preserving node-node proximity (similarity/distance). It is thus natural to raise the question: can we embed an entire graph into a vector in an unsupervised way, and how? However, most existing methods for graph-level, i.e. whole-graph, embeddings assume a supervised model (Zhang & Chen, 2019), with only a few exceptions such as graph kernels (Yanardag & Vishwanathan, 2015), which typically count subgraphs for a given graph and can be slow, and GRAPH2VEC (Narayanan et al., 2017), which is transductive. A key challenge in designing an unsupervised graph-level embedding model is the lack of graph-level signals in the training stage. Unlike node-level embedding, which has a long history of exploiting the link structure of a graph to embed nodes, no such natural proximity (similarity/distance) information exists between graphs. Supervised methods therefore typically resort to graph labels as guidance and use aggregation-based methods that combine node embeddings into a graph-level representation. However, this is problematic, as simple aggregation of node embeddings preserves only limited graph-level properties, which is often insufficient for measuring graph-graph proximity ("inter-graph" information).
Inspired by the recent progress on graph proximity modeling (Ktena et al., 2017; Bai et al., 2019), we propose a novel framework, UGRAPHEMB, that employs multi-scale aggregations of node-level embeddings, guided by the graph-graph proximity defined by well-accepted, domain-agnostic graph proximity metrics such as Graph Edit Distance (GED) (Bunke, 1983) and Maximum Common Subgraph (MCS) (Bunke & Shearer, 1998). The goal of UGRAPHEMB is to learn high-quality graph-level representations in a completely unsupervised and inductive fashion: during training, it learns a function that maps a graph into a universal embedding space that best preserves graph-graph proximity, so that after training, any new graph can be mapped into this space by applying the learned function. Inspired by the recent success of pre-training methods in the text domain, such as ELMO (Peters et al., 2018), UGRAPHEMB first computes the graph-graph proximity scores, yielding a "hyper-level graph" where each node is a graph in the dataset and each edge carries a proximity score representing its weight/strength. UGRAPHEMB then trains a function that maps each graph into an embedding preserving these proximity scores.
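The training objective described above can be sketched in a few lines: pool node-level features into a graph-level vector and penalize the gap between embedding distances and precomputed graph-graph proximity scores. This is a minimal illustration under strong simplifying assumptions (degree features, a single linear map, one aggregation scale); the function names and the toy proximity score are hypothetical, not UGRAPHEMB's actual architecture.

```python
import numpy as np

def graph_embedding(adj, W):
    """Embed one graph: node features = degrees, mean-pool to graph level,
    then apply a linear map (a stand-in for multi-scale aggregation)."""
    degrees = adj.sum(axis=1, keepdims=True)   # (n, 1) node-level features
    pooled = degrees.mean(axis=0)              # graph-level aggregation
    return pooled @ W                          # map into the embedding space

def proximity_loss(graphs, prox, W):
    """MSE between embedding distances and given graph-graph proximity
    scores (e.g., normalized GED) -- the unsupervised training signal."""
    embs = [graph_embedding(a, W) for a in graphs]
    loss = 0.0
    for (i, j), score in prox.items():
        d = np.linalg.norm(embs[i] - embs[j])
        loss += (d - score) ** 2
    return loss / len(prox)

# Toy "hyper-level graph": two graphs, one precomputed proximity score.
g0 = np.array([[0, 1], [1, 0]], dtype=float)                    # single edge
g1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # triangle
W = np.ones((1, 4)) * 0.5
print(proximity_loss([g0, g1], {(0, 1): 1.0}, W))
```

In a real system `W` would be a trained neural network and the proximity scores would come from (approximate) GED or MCS computation; the objective's shape is the point here.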

PiNet: A Permutation Invariant Graph Neural Network for Graph Classification Machine Learning

We propose an end-to-end deep learning model for graph classification and representation learning that is invariant to permutations of the nodes of the input graphs. We address the challenge of learning a fixed-size graph representation for graphs of varying dimensions through a differentiable node attention pooling mechanism. In addition to a theoretical proof of its invariance to permutation, we provide empirical evidence demonstrating a statistically significant gain in accuracy on an isomorphic graph classification task given only a small number of training examples. We analyse the effect of four different matrices for facilitating the local message-passing mechanism by which graph convolutions are performed, versus a matrix parametrised by a learned parameter pair that can transition smoothly between them. Finally, we show that our model achieves competitive classification performance with existing techniques on a set of molecule datasets.
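The two properties claimed above, a fixed-size output for variable-size graphs and invariance to node permutation, both follow from attention pooling's weighted-sum structure. The following is a minimal numpy sketch of generic softmax attention pooling (not PiNet's actual layer; the attention vector `a` stands in for learned parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(node_feats, a):
    """Score each node, softmax the scores, and take the weighted sum.
    The output size is fixed regardless of the node count, and permuting
    the rows of `node_feats` permutes the weights identically, so the
    weighted sum -- hence the graph representation -- is unchanged."""
    scores = softmax(node_feats @ a)   # (n,) attention weights
    return scores @ node_feats         # fixed-size graph representation

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # 5 nodes, 3 features
a = rng.normal(size=3)                 # attention vector (learned in practice)
perm = rng.permutation(5)
print(np.allclose(attention_pool(X, a), attention_pool(X[perm], a)))  # True
```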

Coloring graph neural networks for node disambiguation Machine Learning

Learning good representations is seen by many machine learning researchers as the main reason behind the tremendous successes of the field in recent years (Bengio et al., 2013). In image analysis (Krizhevsky et al., 2012), natural language processing (Vaswani et al., 2017) and reinforcement learning (Mnih et al., 2015), groundbreaking results rely on efficient and flexible deep learning architectures. Despite a large literature and state-of-the-art performance on benchmark graph classification datasets, graph neural networks still lack a similar theoretical foundation (Xu et al., 2019). Spectral methods (Defferrard et al., 2016; Kipf and Welling, 2017) perform convolution in the Fourier domain of the graph. Recently, Xu et al. (2019) showed that MPNNs are at most as expressive as the Weisfeiler-Lehman (WL) test for graph isomorphism (Weisfeiler and Lehman, 1968), while other recent approaches (Maron et al., 2019c) require tensors of order quadratic in the graph size. In this work we present the theoretical tools used to design our universal graph representation: universal representations can be created by combining a separable representation with an MLP.
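The WL expressiveness bound invoked above is easy to demonstrate concretely. Below is a minimal sketch of 1-dimensional WL colour refinement, together with a classic pair of graphs it cannot distinguish (a 6-cycle versus two disjoint triangles, both 2-regular); any MPNN bounded by WL must likewise confuse them. Function names are illustrative, not from the paper:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL refinement: repeatedly relabel each node by the pair
    (own colour, sorted multiset of neighbour colours). The final colour
    histogram is the signature the WL isomorphism test compares."""
    n = len(adj)
    colors = [0] * n                      # uniform initial colouring
    for _ in range(rounds):
        sigs = [
            (colors[i], tuple(sorted(colors[j] for j in range(n) if adj[i][j])))
            for i in range(n)
        ]
        relabel = {s: c for c, s in enumerate(sorted(set(sigs)))}
        colors = [relabel[s] for s in sigs]
    return Counter(colors)

def cycle6():
    adj = [[0] * 6 for _ in range(6)]
    for i in range(6):
        adj[i][(i + 1) % 6] = adj[(i + 1) % 6][i] = 1
    return adj

def two_triangles():
    adj = [[0] * 6 for _ in range(6)]
    for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
        adj[a][b] = adj[b][a] = 1
    return adj

# Non-isomorphic graphs, identical WL signatures: 1-WL fails here.
print(wl_colors(cycle6()) == wl_colors(two_triangles()))  # True
```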

Quaternion Graph Neural Networks Machine Learning

Recently, graph neural networks (GNNs) have become a principal research direction for learning low-dimensional continuous embeddings of nodes and graphs to predict node and graph labels, respectively. However, Euclidean embeddings have high distortion when GNNs are used to model complex graphs such as social networks. Furthermore, existing GNNs become parameter-inefficient as the number of hidden layers increases. Therefore, we move beyond the Euclidean space to a hyper-complex vector space to improve graph representation quality and reduce the number of model parameters. To this end, we propose quaternion graph neural networks (QGNN) to generalize GCNs within the Quaternion space and learn quaternion embeddings for nodes and graphs. The Quaternion space, a hyper-complex vector space, provides highly meaningful computations through the Hamilton product compared to the Euclidean and complex vector spaces. As a result, our QGNN can reduce the model size by up to four times and learn better graph representations. Experimental results show that the proposed QGNN produces state-of-the-art accuracies on a range of well-known benchmark datasets for three downstream tasks: graph classification, semi-supervised node classification, and text (node) classification.
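The Hamilton product at the heart of this line of work is just the standard quaternion multiplication: each output component mixes all four input components, which is the weight-sharing that lets quaternion layers use roughly a quarter of the parameters of a real-valued layer of the same width. A minimal sketch (not the QGNN layer itself):

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions given as (r, i, j, k) arrays.
    Note every output component draws on all four components of both
    inputs -- the cross-component weight sharing quaternion layers exploit."""
    r1, i1, j1, k1 = p
    r2, i2, j2, k2 = q
    return np.array([
        r1*r2 - i1*i2 - j1*j2 - k1*k2,   # real part
        r1*i2 + i1*r2 + j1*k2 - k1*j2,   # i part
        r1*j2 - i1*k2 + j1*r2 + k1*i2,   # j part
        r1*k2 + i1*j2 - j1*i2 + k1*r2,   # k part
    ])

# Sanity check against a defining quaternion identity: i * j = k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton(i, j))  # [0. 0. 0. 1.], i.e. k
```

Note the product is non-commutative: `hamilton(j, i)` yields -k, not k.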

Gaussian Embedding of Large-scale Attributed Graphs Machine Learning

Graph embedding methods transform high-dimensional and complex graph contents into low-dimensional representations. They are useful for a wide range of graph analysis tasks including link prediction, node classification, recommendation and visualization. Most existing approaches represent graph nodes as point vectors in a low-dimensional embedding space, ignoring the uncertainty present in real-world graphs. Furthermore, many real-world graphs are large-scale and rich in content (e.g. node attributes). In this work, we propose GLACE, a novel, scalable graph embedding method that preserves both graph structure and node attributes effectively and efficiently in an end-to-end manner. GLACE models uncertainty through Gaussian embeddings, and supports inductive inference for new nodes based on their attributes. In our comprehensive experiments, we evaluate GLACE on real-world graphs, and the results demonstrate that GLACE significantly outperforms state-of-the-art embedding methods on multiple graph analysis tasks.
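Representing each node as a Gaussian rather than a point means "distance" between nodes becomes a divergence between distributions, and the learned variance carries the uncertainty the abstract refers to. A common choice in Gaussian-embedding work is the closed-form KL divergence between diagonal Gaussians, sketched below (this is a generic formula, not necessarily GLACE's exact objective):

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """Closed-form KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ).
    Asymmetric by design: it can encode directed relations, and a node
    with large variance (high uncertainty) is penalized differently as
    source vs. target."""
    return 0.5 * np.sum(
        np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0
    )

mu = np.array([0.0, 0.0])
var = np.array([1.0, 1.0])
print(kl_diag_gaussians(mu, var, mu, var))  # 0.0 for identical distributions
```

A training loss would then push the KL divergence down for linked (or attribute-similar) node pairs and up for negative samples.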