
Collaborating Authors: Bacciu, Davide


Accelerating the identification of informative reduced representations of proteins with deep learning for graphs

arXiv.org Machine Learning

The limits of molecular dynamics (MD) simulations of macromolecules are steadily pushed forward by the relentless development of computer architectures and algorithms. This explosion in the number and extent (in size and time) of MD trajectories creates the need for automated and transferable methods to rationalise the raw data and make quantitative sense out of them. Recently, an algorithmic approach was developed by some of us to identify the subset of a protein's atoms, or mapping, that enables the most informative description of it. This method relies on the computation, for a given reduced representation, of the associated mapping entropy, that is, a measure of the information loss due to the simplification. Albeit relatively straightforward, this calculation can be time-consuming. Here, we describe the implementation of a deep learning approach aimed at accelerating the calculation of the mapping entropy. The method relies on deep graph networks, which provide extreme flexibility in the input format. We show that deep graph networks are accurate and remarkably efficient, with a speedup factor as large as $10^5$ with respect to the algorithmic computation of the mapping entropy. This scheme holds great potential for the study of biomolecules, for instance to reconstruct a protein's mapping entropy landscape, and its applications reach much farther, since it is easily transferable to the computation of arbitrary functions of a molecule's structure.
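
A minimal sketch of the general idea, not the paper's architecture: a deep graph network that regresses a scalar target (standing in for the mapping entropy) from a graph over a molecule's retained atoms. Layer count, feature sizes and the mean-pooling readout are illustrative assumptions.

```python
# Minimal sketch (not the paper's architecture): a deep graph network that
# regresses a scalar target (standing in for the mapping entropy) from a
# graph over a molecule's retained atoms. Sizes are illustrative.
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Sum-aggregation message passing over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim), adj: (n_nodes, n_nodes)
        return torch.relu(self.lin_self(x) + self.lin_neigh(adj @ x))

class MappingEntropyRegressor(nn.Module):
    def __init__(self, in_dim=16, hidden=64, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            GraphConvLayer(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        for layer in self.layers:
            x = layer(x, adj)
        # Mean pooling over nodes gives a graph-level embedding.
        return self.readout(x.mean(dim=0)).squeeze(-1)

# Toy usage with random data.
n_atoms, feat_dim = 30, 16
x = torch.randn(n_atoms, feat_dim)            # per-atom features (e.g. one-hot type)
adj = (torch.rand(n_atoms, n_atoms) > 0.8).float()
adj = ((adj + adj.T) > 0).float()             # symmetrise
model = MappingEntropyRegressor(in_dim=feat_dim)
prediction = model(x, adj)                    # scalar estimate of the target
```

The appeal of such a surrogate is that a forward pass costs a handful of matrix products per graph, which is what makes the reported speedup over the exact algorithmic computation plausible.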


Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory

arXiv.org Machine Learning

The effectiveness of recurrent neural networks can be largely influenced by their ability to store, in their dynamical memory, information extracted from input sequences at different frequencies and timescales. Such a feature can be introduced into a neural architecture by an appropriate modularization of the dynamic memory. In this paper we propose a novel incrementally trained recurrent architecture that explicitly targets multi-scale learning. First, we show how to extend the architecture of a simple RNN by separating its hidden state into different modules, each one subsampling the network's hidden activations at a different frequency. Then, we discuss a training algorithm in which new modules are iteratively added to the model to learn progressively longer dependencies. Each new module works at a slower frequency than the previous ones and is initialized to encode the subsampled sequence of hidden activations. Experimental results on synthetic and real-world datasets for speech recognition and handwritten character recognition show that the modular architecture and the incremental training algorithm improve the ability of recurrent neural networks to capture long-term dependencies.
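
An illustrative sketch of the multi-scale memory idea (not the paper's exact architecture, and without the incremental-training and initialization steps): the hidden state is split into modules, and module i is updated only every 2**i time steps, so slower modules can retain longer-range information.

```python
# Illustrative sketch: hidden state split into modules ticking at different rates.
# The exponential clock schedule and module sizes are assumptions for illustration.
import torch
import torch.nn as nn

class MultiScaleRNN(nn.Module):
    def __init__(self, input_dim, module_dim, n_modules):
        super().__init__()
        self.n_modules = n_modules
        self.module_dim = module_dim
        self.cells = nn.ModuleList(
            nn.RNNCell(input_dim, module_dim) for _ in range(n_modules)
        )

    def forward(self, x):
        # x: (seq_len, batch, input_dim)
        seq_len, batch, _ = x.shape
        states = [x.new_zeros(batch, self.module_dim) for _ in range(self.n_modules)]
        for t in range(seq_len):
            for i, cell in enumerate(self.cells):
                if t % (2 ** i) == 0:           # module i ticks every 2**i steps
                    states[i] = cell(x[t], states[i])
        return torch.cat(states, dim=-1)         # concatenated multi-scale memory

# Toy usage.
model = MultiScaleRNN(input_dim=8, module_dim=16, n_modules=3)
out = model(torch.randn(50, 4, 8))               # -> (4, 48)
```

In the incremental scheme described above, one would train with a single module first and append slower modules one at a time, initializing each to reproduce the subsampled hidden trajectory of the existing network.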


Generalising Recursive Neural Models by Tensor Decomposition

arXiv.org Machine Learning

Most machine learning models for structured data encode the structural knowledge of a node by leveraging simple aggregation functions (in neural models, typically a weighted sum) of the information in the node's neighbourhood. Nevertheless, the choice of simple context aggregation functions, such as the sum, can be highly sub-optimal. In this work we introduce a general approach to model the aggregation of structural context through a tensor-based formulation. We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker tensor decomposition. This approximation limits the size of the parameter space, decoupling it from the size of the hidden encoding space. By this means, we can effectively regulate the trade-off between the expressivity of the encoding (controlled by the hidden size), computational complexity, and model generalisation (influenced by the parameterisation). Finally, we introduce a new Tensorial Tree-LSTM, derived as an instance of our framework, and use it to experimentally assess our working hypotheses on tree classification scenarios.
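
A sketch of the core trade-off, under illustrative sizes and ranks (not the paper's settings): a full bilinear aggregation of two children encodings requires an H x H x H weight tensor, whereas a Tucker-factorised version keeps only a small core and one factor matrix per mode.

```python
# Sketch of Tucker-factorised context aggregation for a node with two children.
# Hidden size and rank below are illustrative assumptions.
import torch
import torch.nn as nn

class TuckerAggregation(nn.Module):
    def __init__(self, hidden=64, rank=8):
        super().__init__()
        self.core = nn.Parameter(torch.randn(rank, rank, rank) * 0.1)  # Tucker core
        self.U1 = nn.Linear(hidden, rank, bias=False)   # projects left child
        self.U2 = nn.Linear(hidden, rank, bias=False)   # projects right child
        self.U3 = nn.Linear(rank, hidden, bias=False)   # maps back to hidden space

    def forward(self, h_left, h_right):
        # h_left, h_right: (batch, hidden)
        a = self.U1(h_left)                              # (batch, rank)
        b = self.U2(h_right)                             # (batch, rank)
        # Contract the core with both projected children: O(rank^3) per example
        # instead of O(hidden^3) for the full tensor.
        c = torch.einsum('ijk,bi,bj->bk', self.core, a, b)
        return torch.tanh(self.U3(c))                    # (batch, hidden)

# Toy usage: parameters drop from 64**3 = 262144 to 8**3 + 3 * 64 * 8 = 2048.
agg = TuckerAggregation(hidden=64, rank=8)
out = agg(torch.randn(32, 64), torch.randn(32, 64))
```

The rank plays the role described in the abstract: it bounds the parameter count independently of the hidden size, so expressivity and parameterisation can be tuned separately.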


A Deep Generative Model for Fragment-Based Molecule Generation

arXiv.org Machine Learning

Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge fall into two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text and learns their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: the generation of invalid and duplicate molecules. To improve validity rates, we develop a language model over small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based masking strategy that helps generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language-model-based competitors, reaching the state-of-the-art performance typical of graph-based approaches. Moreover, the generated molecules display molecular properties similar to those in the training sample, even in the absence of explicit task-specific supervision.
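
A rough illustration of the two ingredients as described above, not the paper's model: an autoregressive language model over fragment tokens rather than atoms or characters, preceded by a frequency-based masking step that replaces rare fragments with a special token. The tokenisation, threshold and network are hypothetical.

```python
# Rough illustration only: fragment-level language model plus frequency-based masking.
from collections import Counter
import torch
import torch.nn as nn

def mask_rare_fragments(fragment_sequences, min_count=2, mask_token="<MASK>"):
    """Replace fragments occurring fewer than `min_count` times with a mask token."""
    counts = Counter(f for seq in fragment_sequences for f in seq)
    return [[f if counts[f] >= min_count else mask_token for f in seq]
            for seq in fragment_sequences]

class FragmentLM(nn.Module):
    """Autoregressive LSTM over a vocabulary of molecular fragments."""
    def __init__(self, vocab_size, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) of fragment indices
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)            # logits for the next fragment at each step

# Toy usage: sequences of fragment identifiers (SMILES-like strings) per molecule.
data = [["c1ccccc1", "C(=O)O"], ["c1ccccc1", "CCN"], ["CC(C)C", "C(=O)O"]]
masked = mask_rare_fragments(data, min_count=2)
vocab = {f: i for i, f in enumerate(sorted({f for seq in masked for f in seq}))}
batch = torch.tensor([[vocab[f] for f in seq] for seq in masked])
logits = FragmentLM(vocab_size=len(vocab))(batch)
```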


Tensor Decompositions in Deep Learning

arXiv.org Machine Learning

The paper surveys the topic of tensor decompositions in modern machine learning applications. It focuses on three active research topics of significant relevance for the community. After a brief review of consolidated works on multi-way data analysis, we consider the use of tensor decompositions in compressing the parameter space of deep learning models. Lastly, we discuss how tensor methods can be leveraged to yield richer adaptive representations of complex data, including structured information. The paper concludes with a discussion on interesting open research challenges.
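
As a small, self-contained illustration of the "compression" use of decompositions mentioned above, the matrix special case already conveys the idea: a dense layer's weights are replaced by a truncated low-rank factorisation. Sizes and rank are illustrative.

```python
# Illustration of parameter compression via a truncated low-rank factorisation
# (the matrix special case of the tensor methods surveyed). Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))          # original dense weights: 131072 params

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 32
A = U[:, :rank] * s[:rank]                   # (512, 32)
B = Vt[:rank, :]                             # (32, 256)

x = rng.standard_normal(256)
y_full = W @ x                               # original layer
y_low = A @ (B @ x)                          # compressed: 512*32 + 32*256 = 24576 params

rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
print(f"compression {W.size / (A.size + B.size):.1f}x, relative error {rel_err:.2f}")
```

On a random matrix, as here, the truncation error is large; the point of the compression literature is that trained weight tensors often have low effective rank, so Tucker-, CP- or tensor-train-style factorisations retain accuracy at a fraction of the parameters.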


A Fair Comparison of Graph Neural Networks for Graph Classification

arXiv.org Machine Learning

Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about such scholarship issues, with the aim of improving the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which has resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigour and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided in order to compare fairly with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field by providing a much needed grounding for rigorous evaluations of graph classification models.
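
A sketch of the kind of structure-agnostic baseline mentioned above: each graph is reduced to an aggregate of its node features, edges are ignored entirely, and a standard classifier is trained on the result. The feature summary and classifier below are illustrative choices, not the paper's exact baseline.

```python
# Structure-agnostic baseline sketch: classify graphs from node-feature aggregates only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def graph_to_vector(node_features):
    """Permutation-invariant, structure-free summary: mean and sum of node features."""
    node_features = np.asarray(node_features)
    return np.concatenate([node_features.mean(axis=0), node_features.sum(axis=0)])

# Toy dataset: each "graph" is just a set of node feature vectors plus a label.
rng = np.random.default_rng(0)
graphs = [rng.standard_normal((rng.integers(5, 20), 8)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

X = np.stack([graph_to_vector(g) for g in graphs])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=10)
print(f"baseline accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

If a GNN evaluated under the same controlled protocol cannot beat such a baseline on a dataset, that is evidence the structural information of that dataset is not actually being exploited.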


A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks

arXiv.org Artificial Intelligence

The paper discusses a pooling mechanism to induce subsampling in graph-structured data and introduces it as a component of a graph convolutional neural network. The pooling mechanism builds on the Non-Negative Matrix Factorization (NMF) of a matrix representing node adjacency and node similarity, as adaptively obtained through the vertex embeddings learned by the model. This mechanism is applied to obtain an incrementally coarser graph in which nodes are adaptively pooled into communities based on the outcomes of the non-negative factorization. The empirical analysis on graph classification benchmarks shows how such a coarsening process yields significant improvements in the predictive performance of the model with respect to its non-pooled counterpart.
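
A hedged sketch of the general shape of such a pooling step (the exact similarity matrix, normalisation and number of communities are assumptions, not the paper's formulation): NMF of a non-negative matrix built from adjacency and embedding similarity yields a soft assignment of nodes to communities, which is then used to coarsen features and adjacency.

```python
# Sketch of NMF-based node pooling; details are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

def nmf_pool(adj, node_embeddings, n_communities=4):
    # Non-negative matrix mixing structure and embedding similarity.
    sim = np.maximum(node_embeddings @ node_embeddings.T, 0.0)
    M = adj + sim
    W = NMF(n_components=n_communities, init="nndsvda", max_iter=500).fit_transform(M)
    # Column-normalised W acts as a soft node-to-community assignment S.
    S = W / (W.sum(axis=0, keepdims=True) + 1e-9)
    pooled_x = S.T @ node_embeddings           # (n_communities, embed_dim)
    pooled_adj = S.T @ adj @ S                 # coarsened adjacency
    return pooled_x, pooled_adj

# Toy usage.
rng = np.random.default_rng(0)
n = 30
adj = (rng.random((n, n)) > 0.8).astype(float)
adj = np.maximum(adj, adj.T)
x = np.abs(rng.standard_normal((n, 16)))
px, padj = nmf_pool(adj, x, n_communities=4)
```

Stacking such a step between convolutional layers produces the incrementally coarser graphs described above, with the assignment adapting as the vertex embeddings are learned.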


Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models

arXiv.org Machine Learning

The Bottom-up Hidden Tree Markov Model is a highly expressive model for tree-structured data. Unfortunately, it cannot be used in practice due to the intractable size of its state-transition matrix. We propose a new approximation which relies on the Tucker factorisation of tensors. The probabilistic interpretation of such an approximation allows us to define a new probabilistic model for tree-structured data. Hence, we define the new approximated model and derive its learning algorithm. Then, we empirically assess the effective power of the new model by evaluating it on two different tasks. In both cases, our model outperforms the other approximated model known in the literature.
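
An illustrative sketch of why a Tucker-style factorisation helps (the parameterisation and normalisation below are assumptions, not the paper's): the bottom-up transition P(parent state | states of the L children) is a tensor with C**(L+1) entries, whereas a factorised version stores only a small core plus one factor matrix per mode.

```python
# Size comparison and lookup for a Tucker-factorised transition tensor (illustrative).
import numpy as np

C, L, R = 10, 4, 3            # hidden states, children per node, Tucker rank
rng = np.random.default_rng(0)

core = rng.standard_normal((R,) * (L + 1))              # small core tensor
factors = [rng.standard_normal((C, R)) for _ in range(L + 1)]

# Approximate transition scores for one configuration of child states.
child_states = [2, 7, 0, 5]
t = core
for mode, state in enumerate(child_states):
    # contract the leading mode of the remaining core with the selected factor row
    t = np.tensordot(factors[mode][state], t, axes=([0], [0]))
scores = factors[L] @ t                                  # one score per parent state
probs = np.exp(scores) / np.exp(scores).sum()            # illustrative normalisation

full_size = C ** (L + 1)                                 # 100000 entries
fact_size = core.size + sum(f.size for f in factors)     # 243 + 150 = 393 parameters
```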


Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)

arXiv.org Machine Learning

Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depend on a variety of characteristics, such as demographic aspects (age, gender, etc.) or the acquisition technology, which might be unrelated to the target of the analysis. In supervised tasks, failing to match the ground-truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection to normalization techniques, up to the use of training algorithms for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To address this need, we introduce a novel index that measures the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data.
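
Not the paper's Confounding Index, just a small illustration of the effect it is designed to quantify: when a nuisance attribute is correlated with the label at training time but not at test time, the classifier learns the confounder and its apparent performance fails to transfer. The data-generating choices below are entirely hypothetical.

```python
# Toy illustration of confounding in a supervised classification problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, confounder_corr):
    """Binary labels; one weakly informative feature plus a confounder feature
    whose correlation with the label is controlled by `confounder_corr`."""
    y = rng.integers(0, 2, size=n)
    signal = y + 2.0 * rng.standard_normal(n)                # weak real signal
    conf = np.where(rng.random(n) < confounder_corr, y, rng.integers(0, 2, size=n))
    confounder = conf + 0.5 * rng.standard_normal(n)
    return np.column_stack([signal, confounder]), y

X_train, y_train = make_data(2000, confounder_corr=0.9)      # confounded training set
X_test_conf, y_test_conf = make_data(2000, confounder_corr=0.9)
X_test_clean, y_test_clean = make_data(2000, confounder_corr=0.0)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy, confounded test:  ", clf.score(X_test_conf, y_test_conf))
print("accuracy, unconfounded test:", clf.score(X_test_clean, y_test_clean))
```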


Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

arXiv.org Machine Learning

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification. This work proposes a detection method that combines non-linear dimensionality reduction and density estimation techniques. Our empirical findings show that the proposed approach is able to effectively detect adversarial examples crafted by non-adaptive attackers, i.e., attackers not specifically tuned to bypass the detection method. Given our promising results, we plan to extend our analysis to adaptive attackers in future work.
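
A hedged sketch of the general recipe (the paper's exact choice of reduction technique, density model and features may differ): embed representations with a non-linear dimensionality reduction, fit a density model on benign data, and flag low-density inputs as suspicious. The percentile threshold and synthetic features are illustrative.

```python
# Sketch: non-linear dimensionality reduction + density estimation as a detector.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
benign = rng.standard_normal((500, 64))          # stand-in for deep features of clean inputs
suspect = rng.standard_normal((50, 64)) + 3.0    # stand-in for shifted/adversarial features

kpca = KernelPCA(n_components=10, kernel="rbf").fit(benign)
kde = KernelDensity(bandwidth=1.0).fit(kpca.transform(benign))

# Flag inputs whose log-density falls below the 5th percentile of benign scores.
threshold = np.percentile(kde.score_samples(kpca.transform(benign)), 5)
flags = kde.score_samples(kpca.transform(suspect)) < threshold
print(f"flagged {flags.mean():.0%} of suspect inputs as adversarial")
```

The caveat stated above applies directly to such a pipeline: an adaptive attacker aware of the reduced space and the density threshold could craft perturbations that remain in high-density regions.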