What is Geometric Deep Learning?

#artificialintelligence

There are various types of graphs, each with its own rules, properties, and possible actions. Graph theory is the study of graphs and what we can learn from them; it will be covered in the next part of this series. For a concrete example of how Graph Learning can greatly improve existing machine learning tasks, we can look at the computational sciences. One of the bottlenecks in computational chemistry, biology, and physics is the representation of concepts, entities, and their interactions.


Beyond Message Passing: a Physics-Inspired Paradigm for Graph Neural Networks

#artificialintelligence

The message-passing paradigm has been the "battle horse" of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, making it possible to analyse the expressive power of GNNs. We argue that the "node and edge-centric" mindset of current graph deep learning schemes imposes strong limitations that hinder future progress in the field. As an alternative, we propose physics-inspired "continuous" learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.

Graphs are a convenient way to abstract complex systems of relations and interactions. The increasing prominence of graph-structured data, from social networks to high-energy physics to chemistry, and a series of high-impact successes have made deep learning on graphs one of the hottest topics in machine learning research [1]. Graph Neural Networks (GNNs) are by far the most common among graph ML methods and among the most popular neural network architectures overall [2]. Graph neural networks take as input a graph endowed with node and edge features and compute a function that depends both on the features and on the graph structure. Message-passing type GNNs, also called Message Passing Neural Networks (MPNNs) [3], propagate node features by exchanging information between adjacent nodes. A typical MPNN architecture has several propagation layers, where each node is updated based on the aggregation of its neighbours' features.
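The propagation step described above is easy to state in code. Below is a minimal sketch of one message-passing layer with sum aggregation and a linear update; the function and variable names are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def mpnn_layer(adj, features, w_self, w_neigh):
    """One message-passing layer (illustrative sketch): each node
    sum-aggregates its neighbours' features, then combines the result
    with its own features via a linear update and a ReLU nonlinearity.

    adj      : (n, n) binary adjacency matrix
    features : (n, d_in) node feature matrix
    w_self   : (d_in, d_out) weights for a node's own features
    w_neigh  : (d_in, d_out) weights for the aggregated messages
    """
    messages = adj @ features                  # sum over adjacent nodes
    updated = features @ w_self + messages @ w_neigh
    return np.maximum(updated, 0.0)            # ReLU

# Toy example: a 4-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))
h = mpnn_layer(adj, x, rng.normal(size=(2, 3)), rng.normal(size=(2, 3)))
print(h.shape)  # (4, 3): each node now carries a 3-dimensional feature
```

Stacking several such layers lets information travel along paths in the graph, one hop per layer.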


Geometric deep learning -- Convolutional Neural Networks on Graphs and Manifolds

#artificialintelligence

The main idea of spectral approaches to graph neural networks is to generalize the Fourier transform to graph and manifold data and to perform the convolution in the spectral domain. The generalization of the Fourier transform consists of using the eigenfunctions of the graph Laplacian as the basis for the transform. The process to apply a convolution using this generalization is as follows: project the signal onto the Laplacian eigenbasis, multiply it by a filter in the spectral domain, and transform the result back to the node domain. This approach has produced very good results on data represented as a graph, but it has an important weakness: Laplacian eigenfunctions are inconsistent across different domains.
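To make that recipe concrete, here is a minimal sketch of a spectral graph convolution using the combinatorial Laplacian L = D - A and a filter that acts diagonally in the spectral domain; all names are illustrative assumptions, not code from the article.

```python
import numpy as np

def spectral_conv(adj, signal, filter_coeffs):
    """Spectral graph convolution (illustrative sketch):
    1. build the graph Laplacian L = D - A,
    2. project the signal onto the Laplacian eigenbasis (graph Fourier transform),
    3. scale each spectral component by a filter coefficient,
    4. transform the result back to the node domain.
    """
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    _, eigvecs = np.linalg.eigh(laplacian)   # eigenbasis U of L
    spectral = eigvecs.T @ signal            # forward graph Fourier transform
    filtered = filter_coeffs * spectral      # pointwise filter in the spectral domain
    return eigvecs @ filtered                # inverse transform

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 0.5, 2.0])       # a scalar signal on the 4 nodes
theta = np.array([1.0, 0.5, 0.25, 0.0])   # one coefficient per eigenvalue
print(spectral_conv(adj, x, theta))
```

Note that the coefficients in `theta` are tied to this particular eigenbasis, which is exactly the weakness noted above: a filter learned on one graph does not transfer to another graph whose Laplacian eigenfunctions differ.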


Geometric Deep Learning Advances Data Science

Communications of the ACM

Researchers are pushing beyond the limitations of convolutional neural networks using geometric deep learning techniques.


Adversarial Autoencoders with Constant-Curvature Latent Manifolds

arXiv.org Machine Learning

Constant-curvature Riemannian manifolds (CCMs) have been shown to be ideal embedding spaces in many application domains, as their non-Euclidean geometry can naturally account for some relevant properties of data, like hierarchy and circularity. In this work, we introduce the CCM adversarial autoencoder (CCM-AAE), a probabilistic generative model trained to represent a data distribution on a CCM. Our method works by matching the aggregated posterior of the CCM-AAE with a probability distribution defined on a CCM, so that the encoder implicitly learns to represent the data on the CCM in order to fool a discriminator network. The geometrical constraint is also explicitly imposed by jointly training the CCM-AAE to maximise the membership degree of the embeddings to the CCM. While several works in recent literature make use of either hyperspherical or hyperbolic manifolds for different learning tasks, ours is the first unified framework to seamlessly deal with CCMs of different curvatures. We show the effectiveness of our model on three different datasets characterised by non-trivial geometry: semi-supervised classification on MNIST, link prediction on two popular citation datasets, and graph-based molecule generation using the QM9 chemical database. Results show that our model compares favourably to other autoencoders based on Euclidean and non-Euclidean geometries on all tasks taken into account.
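The abstract does not spell out the membership-degree term, but the geometric idea can be illustrated for the positive-curvature (hyperspherical) case, where the manifold is the sphere of radius 1/sqrt(curvature). The sketch below is a rough, assumption-laden illustration of such a constraint, not the CCM-AAE's actual loss.

```python
import numpy as np

def sphere_membership_penalty(z, curvature=1.0):
    """Illustrative penalty (not the paper's exact formulation): for a
    positive-curvature CCM, measure the mean squared deviation of each
    embedding's norm from the sphere radius 1/sqrt(curvature).
    Minimising it pushes embeddings onto the manifold.
    """
    radius = 1.0 / np.sqrt(curvature)
    norms = np.linalg.norm(z, axis=1)
    return np.mean((norms - radius) ** 2)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 3))             # a batch of 8 embeddings in R^3
print(sphere_membership_penalty(z))     # positive when embeddings lie off the sphere
print(sphere_membership_penalty(z / np.linalg.norm(z, axis=1, keepdims=True)))  # ~0 on it
```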