Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling Machine Learning

Recent work on graph neural networks (GNNs) has led to improvements in molecular activity and property prediction tasks. However, GNNs lack interpretability: they fail to capture the relative importance of different molecular substructures, owing to the absence of efficient intermediate pooling steps for sparse graphs. To address this issue, we propose LaPool (Laplacian Pooling), a novel, data-driven, and interpretable graph pooling method that takes both node features and graph structure into account to improve molecular understanding. Inspired by theories in graph signal processing, LaPool performs a feature-driven hierarchical segmentation of molecules by selecting a set of centroid nodes from a graph as cluster representatives. It then learns a sparse assignment of the remaining nodes to these clusters using an attention mechanism. We benchmark our model by showing that it outperforms recent graph pooling layers on molecular graph understanding and prediction tasks. We then demonstrate improved interpretability by identifying important molecular substructures and by generating novel, valid molecules, with promising applications in drug discovery and pharmacology.
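The abstract describes a two-step mechanism: select centroid nodes from the graph, then softly assign the remaining nodes to those centroids with attention. A minimal numpy sketch of that general idea follows; the centroid rule (a node whose local Laplacian signal variation exceeds all its neighbours') and the dot-product attention used here are simplified stand-ins, not the paper's exact formulation:

```python
import numpy as np

def lapool_sketch(A, X):
    """Toy Laplacian-pooling sketch: pick centroids, softly assign the rest."""
    # Graph Laplacian L = D - A
    L = np.diag(A.sum(axis=1)) - A
    # Local signal variation: how much each node's features deviate
    # from its neighbourhood (row-wise norm of L @ X)
    v = np.linalg.norm(L @ X, axis=1)
    # Simplified centroid rule: keep nodes whose variation exceeds
    # that of every neighbour
    centroids = [i for i in range(len(A))
                 if all(v[i] > v[j] for j in np.nonzero(A[i])[0])]
    # Soft attention: assign every node to the centroids via a
    # softmax over feature similarity
    scores = X @ X[centroids].T
    S = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # Coarsened graph: pooled features and pooled adjacency
    return centroids, S.T @ X, S.T @ A @ S
```

On a 3-node path graph where only the middle node carries signal, the middle node is the lone centroid and the whole graph collapses into a single cluster.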

Geometric Graph Convolutional Neural Networks Machine Learning

Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints for representing chemical compounds. However, GCNs cannot take into account the ordering of node neighbors, even when the graph vertices have a geometric interpretation that induces an order based on their spatial positions. To remedy this issue, we propose the Geometric Graph Convolutional Network (geo-GCN), which uses spatial features to efficiently learn from graphs that are naturally embedded in space. Our contribution is threefold: we propose a GCN-inspired architecture which (i) leverages node positions, (ii) is a proper generalisation of both GCNs and Convolutional Neural Networks (CNNs), and (iii) benefits from augmentation, which further improves performance and ensures invariance with respect to the desired properties. Empirically, geo-GCN outperforms state-of-the-art graph-based methods on image classification and chemical tasks.

Introduction: Convolutional Neural Networks (CNNs) outperform humans on visual learning tasks such as image classification (Krizhevsky, Sutskever, and Hinton 2012), object detection (Seferbekov et al. 2018), and image captioning (Yang et al. 2017). They have also been successfully applied to text processing (Kim 2014) and time series analysis (Yang et al. 2015). Nevertheless, CNNs cannot easily be adapted to irregular entities such as graphs, where data is not organised in a grid-like structure. Graph Convolutional Networks (GCNs) attempt to mimic CNNs by operating on spatially close neighbors. Motivated by spectral graph theory, Kipf and Welling (2016) use fixed weights determined by the adjacency matrix of a graph to aggregate the labels of a node's neighbors.
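The fixed, adjacency-derived aggregation attributed to Kipf and Welling above can be written in a few lines. A minimal numpy sketch of the standard GCN propagation rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W) follows; this is the baseline rule only, not the geo-GCN extension with spatial features:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: fixed weights derived from the adjacency matrix."""
    # Add self-loops, then symmetrically normalise: D^-1/2 (A+I) D^-1/2
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    # Aggregate neighbour features with the fixed weights, then
    # apply the learnable transform W and a ReLU nonlinearity
    return np.maximum(A_norm @ H @ W, 0.0)
```

Note that the aggregation weights in A_norm are entirely determined by the graph structure; only W is learned, which is precisely the limitation geo-GCN addresses by bringing node positions into the layer.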

Function Space Pooling For Graph Convolutional Networks Machine Learning

Many real-world data, such as social networks, collections of documents, and chemical structures, are naturally represented as graphs. Consequently, there exists great potential for the application of machine learning to graphs. Given the success of neural networks and deep learning in the analysis of images, there has recently been much research on applying or generalizing neural networks to graphs, in many cases achieving state-of-the-art performance (Wu et al., 2019). The graph convolutional network is a neural network architecture commonly applied to graphs. It consists of a sequence of convolutional layers, where each layer iteratively updates a representation, or embedding, of each vertex. This update applies an operation that considers the current representation of each vertex together with the current representations of its adjacent neighbours (Gilmer et al., 2017). The output of a sequence of convolutional layers is a representation of each vertex that encodes properties of the vertex in question and of the vertices in its neighbourhood. If one wishes to perform a vertex-centric task such as vertex classification, one may operate directly on the set of vertex representations output by the sequence of convolutional layers.
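The per-vertex update described above (combine a vertex's current representation with those of its adjacent neighbours) can be sketched in the general message-passing style of Gilmer et al. A minimal numpy illustration, where the parameter names W_self and W_nbr and the sum aggregator are illustrative choices, not a specific paper's definition:

```python
import numpy as np

def conv_layer(A, H, W_self, W_nbr):
    """One graph convolutional layer: update each vertex from
    its own representation plus its neighbours' representations."""
    # Message: sum of adjacent vertices' current representations
    msg = A @ H
    # Update: combine own state and aggregated messages, then squash
    return np.tanh(H @ W_self + msg @ W_nbr)
```

Stacking k such layers gives each vertex a representation that encodes its k-hop neighbourhood, which is exactly the output a vertex-classification head would consume.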

Graph Convolutional Networks with EigenPooling Machine Learning

Graph neural networks, which generalize deep neural network models to graph-structured data, have attracted increasing attention in recent years. They usually learn node representations by transforming, propagating, and aggregating node features, and have been proven to improve the performance of many graph-related tasks such as node classification and link prediction. To apply graph neural networks to the graph classification task, approaches for generating a graph representation from node representations are needed. A common way is to combine the node representations globally; however, this overlooks rich structural information. Thus a hierarchical pooling procedure is desired to preserve the graph structure during graph representation learning. There are some recent works on hierarchically learning graph representations, analogous to the pooling step in conventional convolutional neural networks (CNNs). However, local structural information is still largely neglected during the pooling process. In this paper, we introduce a pooling operator, EigenPooling, based on the graph Fourier transform, which can utilize both the node features and the local structures during the pooling process. We then design pooling layers based on this operator, which are combined with traditional GCN convolutional layers to form a graph neural network framework, EigenGCN, for graph classification. Theoretical analysis is provided to understand EigenPooling from both local and global perspectives. Experimental results on six commonly used graph classification benchmarks demonstrate the effectiveness of the proposed framework.
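The pooling operator above is built on the graph Fourier transform: projecting node features onto the eigenvectors of the graph Laplacian, ordered by eigenvalue (low eigenvalue = smooth, low-frequency components). A minimal numpy sketch of that core transform; truncating to the k lowest frequencies is a simplification here, standing in for the paper's per-subgraph pooling:

```python
import numpy as np

def graph_fourier_pool(A, X, k):
    """Project node features onto the k lowest-frequency
    eigenvectors of the graph Laplacian."""
    # Graph Fourier basis = eigenvectors of L = D - A
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    # Fourier coefficients of the features; keep the k smoothest
    return (U.T @ X)[:k]
```

A sanity check on why this preserves structure: a constant signal on a connected graph is perfectly smooth, so all of its energy lands in the single zero-frequency coefficient.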

Attacking Graph Convolutional Networks via Rewiring Machine Learning

Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks such as node classification and graph classification. Recent research shows that graph neural networks are vulnerable to adversarial attacks, which deliberately add carefully crafted perturbations to the graph structure that are intended to be unnoticeable. Such perturbations are usually created by adding or deleting a few edges, which can still be noticeable even when the number of modified edges is small. In this paper, we propose a graph rewiring operation that affects the graph in a less noticeable way than adding or deleting edges. We then use reinforcement learning to learn an attack strategy based on the proposed rewiring operation. Experiments on real-world graphs demonstrate the effectiveness of the proposed framework. To better understand the framework, we further analyze how the perturbations it generates to the graph structure affect the output of the target model.
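One plausible way to formalise the rewiring primitive is as an edge move: delete the edge (u, v) and add (u, w) in its place, so the total edge count and the degree of u are preserved, which is part of what makes the change harder to notice than a bare addition or deletion. The sketch below omits any constraint on which new endpoint w is admissible (e.g. proximity to u), which an actual attack would impose:

```python
import numpy as np

def rewire(A, u, v, w):
    """Rewiring: move the edge (u, v) to (u, w), preserving the
    edge count and the degree of u. Returns a new adjacency matrix."""
    A = A.copy()
    # The move must replace an existing edge with a non-edge
    assert A[u, v] == 1 and A[u, w] == 0 and v != w
    A[u, v] = A[v, u] = 0   # delete the old edge
    A[u, w] = A[w, u] = 1   # add the replacement edge
    return A
```

A reinforcement-learning attacker, as described in the abstract, would then choose the triple (u, v, w) at each step to degrade the target model's output.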