Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing (Artificial Intelligence)

We introduce the Contextual Graph Markov Model, an approach that combines ideas from generative models and neural networks for the processing of graph data. It is founded on a constructive methodology that builds a deep architecture comprising layers of probabilistic models, which learn to encode the structured information in an incremental fashion. Context is diffused efficiently and scalably across the graph's vertices and edges. The resulting graph encoding is used in combination with discriminative models to address structure classification benchmarks.
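
To make the constructive layering concrete, here is a minimal numpy sketch of one plausible reading of such an architecture: each layer assigns every vertex a distribution over hidden states, conditioning on the vertex label and on the neighbors' state distributions computed by the layer below. The function names, the random parameters, and the simplified averaging update are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch of layer-wise context diffusion in the spirit of a
# deep, constructive architecture of probabilistic layers. All names
# and the simplified update rule are illustrative assumptions.
import numpy as np

def diffuse_layer(adj, feats, prev_states, emission, transition):
    """One hypothetical context-diffusing layer.

    adj         : (n, n) binary adjacency matrix
    feats       : (n,) integer vertex labels
    prev_states : (n, C) state distributions from the layer below
    emission    : (K, C) P(label | state), K distinct labels
    transition  : (C, C) P(state | neighbor state)
    """
    # Average the neighbors' state distributions (the "context").
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    context = (adj @ prev_states) / deg              # (n, C)
    # Combine emission and context, then renormalize per vertex.
    scores = emission[feats] * (context @ transition.T)
    return scores / scores.sum(axis=1, keepdims=True)

# Stacking layers: the base layer starts from a uniform prior.
rng = np.random.default_rng(0)
n, K, C, L = 5, 3, 4, 3
adj = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
feats = rng.integers(0, K, size=n)
states = np.full((n, C), 1.0 / C)
encodings = []
for _ in range(L):
    em = rng.dirichlet(np.ones(C), size=K)   # stand-ins for trained
    tr = rng.dirichlet(np.ones(C), size=C)   # layer parameters
    states = diffuse_layer(adj, feats, states, em, tr)
    encodings.append(states)
# Concatenating the per-layer states gives a graph encoding that a
# downstream discriminative classifier can consume.
graph_encoding = np.concatenate(encodings, axis=1).mean(axis=0)
```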

Tree Edit Distance Learning via Adaptive Symbol Embeddings (Machine Learning)

Metric learning aims to improve classification accuracy by learning a distance measure that brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance that violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees that learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that our proposed metric learning approach improves upon the state of the art in metric learning for trees on six benchmark data sets, ranging from computer science and biomedical data to a natural language processing data set containing over 300,000 nodes.
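
A minimal sketch of the indirect cost construction the abstract describes: replacement costs are Euclidean distances between symbol embeddings, and insertion/deletion costs are distances to a designated gap vector. The random embeddings below stand in for learned ones, and a plain weighted sequence edit distance replaces the full tree edit distance (e.g., Zhang-Shasha) to keep the example short.

```python
# Deriving edit costs from symbol embeddings (illustrative stand-ins).
import numpy as np

rng = np.random.default_rng(0)
alphabet = ["NP", "VP", "DT", "NN", "VB"]
emb = {s: rng.normal(size=8) for s in alphabet}
gap = np.zeros(8)  # hypothetical embedding of the empty symbol

def cost(x, y):
    """Edit cost as Euclidean distance between embeddings; None = gap."""
    a = gap if x is None else emb[x]
    b = gap if y is None else emb[y]
    return float(np.linalg.norm(a - b))

def edit_distance(s, t):
    """Weighted Levenshtein distance with embedding-derived costs."""
    m, n = len(s), len(t)
    d = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        d[i, 0] = d[i - 1, 0] + cost(s[i - 1], None)        # deletion
    for j in range(1, n + 1):
        d[0, j] = d[0, j - 1] + cost(None, t[j - 1])        # insertion
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(
                d[i - 1, j] + cost(s[i - 1], None),         # delete
                d[i, j - 1] + cost(None, t[j - 1]),         # insert
                d[i - 1, j - 1] + cost(s[i - 1], t[j - 1])  # replace
            )
    return d[m, n]

print(edit_distance(["NP", "DT", "NN"], ["NP", "VB", "NN"]))
```

Because the costs are norms of vector differences, they satisfy the metric axioms by construction, which is the point of learning the distance indirectly through embeddings.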

Pre-training Graph Neural Networks with Kernels (Machine Learning)

Many machine learning techniques have been proposed in the last few years to process data represented in graph-structured form. Graphs can be used to model several scenarios, from molecules and materials to RNA secondary structures. Several kernel functions have been defined on graphs that, coupled with kernelized learning algorithms, have shown state-of-the-art performance on many tasks. Recently, several definitions of Neural Networks for Graphs (GNNs) have been proposed, but their accuracy is not yet satisfactory. In this paper, we propose a task-independent pre-training methodology that allows a GNN to learn the representation induced by state-of-the-art graph kernels. The supervised learning phase then fine-tunes this representation for the task at hand. The proposed technique is agnostic to the adopted GNN architecture and kernel function, and shows consistent improvements in the predictive performance of GNNs in our preliminary experimental results.
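
One way to picture the pre-training phase is to train the GNN's graph embeddings so that their inner products reproduce a precomputed graph-kernel matrix, before any supervised fine-tuning. The PyTorch sketch below does exactly that; the one-layer GNN, the random stand-in kernel matrix, and the inner-product matching loss are illustrative assumptions rather than the paper's exact objective.

```python
# Kernel-guided pre-training of a toy GNN (illustrative sketch).
import torch

class TinyGNN(torch.nn.Module):
    def __init__(self, in_dim, hid):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid)

    def forward(self, adj, x):
        # One round of neighborhood aggregation, then mean readout.
        h = torch.relu(self.lin(adj @ x))
        return h.mean(dim=0)  # graph-level embedding

torch.manual_seed(0)
graphs = []
for _ in range(8):  # toy dataset of random graphs
    n = torch.randint(4, 8, (1,)).item()
    adj = (torch.rand(n, n) < 0.3).float()
    adj = ((adj + adj.T) > 0).float()
    graphs.append((adj, torch.rand(n, 5)))

# Stand-in for a kernel matrix computed by a graph kernel of choice.
K = torch.rand(8, 8)
K = (K + K.T) / 2

gnn = TinyGNN(5, 16)
opt = torch.optim.Adam(gnn.parameters(), lr=1e-2)
for _ in range(200):  # pre-training: match kernel similarities
    z = torch.stack([gnn(a, x) for a, x in graphs])
    loss = ((z @ z.T - K) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# Supervised fine-tuning of the same weights would follow here.
```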

On Filter Size in Graph Convolutional Networks (Machine Learning)

Recently, many researchers have been focusing on the definition of neural networks for graphs. The basic component of many of these approaches remains the graph convolution idea proposed almost a decade ago. In this paper, we extend this basic component, following an intuition derived from the well-known convolutional filters over multi-dimensional tensors. In particular, we derive a simple, efficient, and effective way to introduce a hyper-parameter on graph convolutions that controls the filter size, i.e., the receptive field over the considered graph. We show with experimental results on real-world graph datasets that the proposed graph convolutional filter improves the predictive performance of Deep Graph Convolutional Networks.
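
One plausible reading of such a filter-size hyper-parameter is sketched in numpy below: like a 1-D convolutional filter of width k, the layer mixes features from all vertices up to k hops away by summing over powers of the normalized adjacency matrix, with one weight matrix per hop. This illustrates the general idea and is not necessarily the paper's exact parameterization.

```python
# A k-hop graph convolution: filter size k = number of hops mixed.
import numpy as np

def khop_graph_conv(adj, x, weights):
    """weights[j] is the (in_dim, out_dim) matrix for hop distance j;
    the filter size is k = len(weights) - 1."""
    # Symmetric degree normalization keeps adjacency powers bounded.
    deg = adj.sum(axis=1).clip(min=1)
    a_hat = adj / np.sqrt(np.outer(deg, deg))
    out, a_pow = 0.0, np.eye(adj.shape[0])
    for w in weights:
        out = out + a_pow @ x @ w   # contribution of hop j
        a_pow = a_pow @ a_hat       # next power of the adjacency
    return np.maximum(out, 0.0)     # ReLU

rng = np.random.default_rng(0)
n, d_in, d_out, k = 6, 4, 8, 2      # receptive field of 2 hops
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0)
x = rng.normal(size=(n, d_in))
ws = [rng.normal(size=(d_in, d_out)) * 0.1 for _ in range(k + 1)]
h = khop_graph_conv(adj, x, ws)     # (n, d_out) vertex features
```

With k = 1 this collapses to the familiar one-hop graph convolution; raising k widens the receptive field without stacking more layers.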

Text Summarization as Tree Transduction by Top-Down TreeLSTM (Machine Learning)

Extractive compression is a challenging natural language processing problem. This work contributes by formulating neural extractive compression as a parse tree transduction problem rather than a sequence transduction task. Building on this formulation, we introduce a deep neural model for learning structure-to-substructure tree transductions by extending the standard Long Short-Term Memory to account for the parent-child relationships in the structural recursion. The proposed model achieves state-of-the-art performance on sentence compression benchmarks, both in terms of accuracy and compression rate.
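
The PyTorch sketch below illustrates the top-down recursion in isolation: a standard LSTM cell is applied along parent-to-child edges, so each node's state is conditioned on its parent's state rather than on a left-to-right sequence. Reusing torch.nn.LSTMCell and the dictionary tree encoding are simplifications of the idea, not the paper's exact transduction model.

```python
# A top-down tree-structured LSTM over a parse tree (illustrative).
import torch

class TopDownTreeLSTM(torch.nn.Module):
    def __init__(self, in_dim, hid):
        super().__init__()
        self.cell = torch.nn.LSTMCell(in_dim, hid)
        self.hid = hid

    def forward(self, x, children, root=0):
        """x: (n, in_dim) node features; children[i]: child ids of i."""
        n = x.shape[0]
        h = torch.zeros(n, self.hid)
        c = torch.zeros(n, self.hid)
        zero = (torch.zeros(1, self.hid), torch.zeros(1, self.hid))
        stack = [(root, zero)]
        while stack:  # root-to-leaf recursion, done iteratively
            i, (ph, pc) = stack.pop()
            hi, ci = self.cell(x[i : i + 1], (ph, pc))
            h[i], c[i] = hi[0], ci[0]
            for ch in children[i]:
                stack.append((ch, (hi, ci)))
        # Per-node states; for extractive compression these could feed
        # a binary keep/drop classifier selecting the substructure.
        return h

# A tiny parse tree: node 0 is the root with children 1 and 2;
# node 1 has child 3.
model = TopDownTreeLSTM(in_dim=5, hid=8)
x = torch.randn(4, 5)
children = {0: [1, 2], 1: [3], 2: [], 3: []}
states = model(x, children)  # (4, 8)
```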