Unsupervised learning for vascular heterogeneity assessment of glioblastoma based on magnetic resonance imaging: The Hemodynamic Tissue Signature

arXiv.org Artificial Intelligence

This thesis focuses on the research and development of the Hemodynamic Tissue Signature (HTS) method: an unsupervised machine learning approach to describe the vascular heterogeneity of glioblastomas by means of perfusion MRI analysis. The HTS builds on the concept of habitats. A habitat is defined as a sub-region of the lesion with a particular MRI profile describing a specific physiological behavior. The HTS method delineates four habitats within the glioblastoma: the High Angiogenic Tumor (HAT) habitat, the most perfused region of the enhancing tumor; the Low Angiogenic Tumor (LAT) habitat, the region of the enhancing tumor with a lower angiogenic profile; the potentially Infiltrated Peripheral Edema (IPE) habitat, the non-enhancing region adjacent to the tumor with elevated perfusion indices; and the Vasogenic Peripheral Edema (VPE) habitat, the remaining edema of the lesion with the lowest perfusion profile. The results of this thesis have been published in ten scientific contributions, including top-ranked journals and conferences in the areas of Medical Informatics, Statistics and Probability, Radiology & Nuclear Medicine, Machine Learning and Data Mining, and Biomedical Engineering. An industrial patent registered in Spain (ES201431289A), Europe (EP3190542A1) and the USA (US20170287133A1) was also issued, summarizing the efforts of the thesis to generate tangible assets beyond the academic output of the research publications. Finally, the methods, technologies and original ideas conceived in this thesis led to the foundation of ONCOANALYTICS CDX, a company framed within the business model of companion diagnostics for pharmaceutical compounds and conceived as a vehicle to facilitate the industrialization of the ONCOhabitats technology.
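
As a rough illustration of the habitat idea only (not the HTS algorithm itself, which is described in the thesis), the sketch below clusters per-voxel perfusion features into four groups with k-means and orders the clusters by mean perfusion; the feature names, array shapes and the choice of k-means are assumptions.

# Illustrative sketch: cluster per-voxel perfusion features into four
# "habitats" with k-means. This is NOT the HTS method; the features, shapes
# and the use of k-means are assumptions made purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-voxel perfusion indices inside the lesion mask
# (e.g. columns standing in for rCBV and rCBF).
perfusion_features = rng.random((5000, 2))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(perfusion_features)

# Relabel clusters by mean perfusion so 0 is the least perfused group and
# 3 the most perfused one (a crude stand-in for the VPE ... HAT ordering).
order = np.argsort(kmeans.cluster_centers_.mean(axis=1))
relabel = np.empty(4, dtype=int)
relabel[order] = np.arange(4)
habitats = relabel[labels]
print(np.bincount(habitats))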


TaBooN -- Boolean Network Synthesis Based on Tabu Search

arXiv.org Artificial Intelligence

Recent developments in omics technologies have revolutionized the investigation of biology by producing molecular data at multiple dimensions and scales. This breakthrough raises the crucial issue of interpreting these data through modelling. In this undertaking, networks provide a suitable framework for modelling the interactions between molecules. Basically, a biological network is composed of nodes, referring to components such as genes or proteins, and edges/arcs formalizing the interactions between them. The evolution of the interactions is then modelled by the definition of a dynamical system. Among the different categories of networks, the Boolean network offers a reliable qualitative framework for modelling. Automatically synthesizing a Boolean network from experimental data therefore remains a necessary but challenging task. In this study, we present TaBooN, an original workflow for synthesizing Boolean networks from biological data. The methodology uses the data, in the form of Boolean profiles, to infer all potential local formulae. These combine to form the model space, from which the model most faithful to biological knowledge and experiments must be found. In the TaBooN workflow, the selection of the fittest model is achieved by a Tabu-search algorithm. TaBooN is an automated method for Boolean network inference from experimental data that can also assist in evaluating and optimizing the dynamic behaviour of biological networks, providing a reliable platform for further modelling and prediction.
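
A toy sketch of the Tabu-search idea only (not the TaBooN workflow): each node's local rule is drawn from a small candidate set, a model is scored by how many observed Boolean transitions it reproduces, and the search moves to the best non-tabu neighbour obtained by swapping one local rule. The candidate rules and transitions below are invented.

# Toy sketch of Tabu search over Boolean-network models (not the TaBooN code).
# Each node has candidate local Boolean rules; a model assigns one rule per
# node; fitness = number of observed synchronous transitions reproduced.
import random

NODES = ["a", "b", "c"]
# Hypothetical candidate local rules inferred from Boolean profiles (assumed).
CANDIDATES = {
    "a": [lambda s: s["b"], lambda s: s["b"] and not s["c"]],
    "b": [lambda s: s["a"] or s["c"], lambda s: s["c"]],
    "c": [lambda s: not s["a"], lambda s: s["a"] and s["b"]],
}
# Hypothetical observed transitions (state -> next state).
OBSERVED = [({"a": 0, "b": 1, "c": 0}, {"a": 1, "b": 0, "c": 1}),
            ({"a": 1, "b": 0, "c": 1}, {"a": 0, "b": 1, "c": 0})]

def step(model, state):
    return {n: int(CANDIDATES[n][model[n]](state)) for n in NODES}

def fitness(model):
    return sum(step(model, s) == t for s, t in OBSERVED)

def tabu_search(iters=50, tabu_len=5, seed=0):
    rng = random.Random(seed)
    current = {n: rng.randrange(len(CANDIDATES[n])) for n in NODES}
    best, tabu = dict(current), []
    for _ in range(iters):
        # Neighbours: swap one node's local rule for another candidate.
        neighbours = [((n, k), {**current, n: k})
                      for n in NODES
                      for k in range(len(CANDIDATES[n])) if k != current[n]]
        moves = [(m, c) for m, c in neighbours if m not in tabu]
        if not moves:
            break
        move, current = max(moves, key=lambda mc: fitness(mc[1]))
        tabu = (tabu + [move])[-tabu_len:]
        if fitness(current) > fitness(best):
            best = dict(current)
    return best, fitness(best)

print(tabu_search())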


Bounded Fuzzy Possibilistic Method of Critical Objects Processing in Machine Learning

arXiv.org Artificial Intelligence

Unsatisfactory accuracy of learning methods is mostly caused by omitting the influence of important parameters such as membership assignments, the type of data objects, and distance or similarity functions. The proposed method, called the Bounded Fuzzy Possibilistic Method (BFPM), addresses issues that previous clustering or classification methods have not sufficiently considered in their membership assignments. In fuzzy methods, an object's memberships must sum to 1; hence, any data object may obtain full membership in at most one cluster or class. Possibilistic methods relax this condition, but they can be satisfied even when an object obtains membership from just one cluster, which prevents the analysis of objects' movements. BFPM differs from previous fuzzy and possibilistic approaches by removing these restrictions. Furthermore, BFPM provides a flexible search space for analysing objects' movements. Data objects are also considered fundamental in learning methods, and knowing the exact type of objects provides a suitable environment for learning algorithms. The thesis introduces a new type of object, called critical, and categorizes data objects into two categories: structural-based and behavioural-based. Critical objects are considered causes of misclassification and misassignment in learning procedures. The thesis also proposes new methodologies to study the behaviour of critical objects, with the aim of evaluating objects' movements (mutation) from one cluster or class to another. The thesis further introduces a new type of feature, called dominant, considered another cause of misclassification and misassignment. Finally, the thesis proposes new families of similarity functions, called the Weighted Feature Distance (WFD) and the Prioritized Weighted Feature Distance (PWFD).
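
The exact WFD/PWFD definitions are given in the thesis; the sketch below is a plausible, hypothetical weighted distance meant only to convey how per-feature weights can damp a dominant feature that would otherwise dictate assignments.

# Sketch of a weighted feature distance (hypothetical form, not the thesis's
# exact WFD/PWFD definition).
import numpy as np

def weighted_feature_distance(x, y, w):
    """Euclidean-type distance with non-negative per-feature weights w."""
    x, y, w = map(np.asarray, (x, y, w))
    return np.sqrt(np.sum(w * (x - y) ** 2))

# A "dominant" feature (here the third one) can be down-weighted so it no
# longer dominates the assignment (hypothetical values).
x, y = [1.0, 2.0, 100.0], [1.5, 2.5, 90.0]
print(weighted_feature_distance(x, y, w=[1.0, 1.0, 1.0]))   # plain Euclidean
print(weighted_feature_distance(x, y, w=[1.0, 1.0, 0.01]))  # dominant feature damped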


A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds

arXiv.org Machine Learning

A new robust algorithm, called SurvLIME-KS, based on the explanation method SurvLIME, is proposed for explaining machine learning survival models. The algorithm is developed to ensure robustness in cases with a small amount of training data or with outliers in the survival data. The first idea behind SurvLIME-KS is to apply the Cox proportional hazards model to approximate the black-box survival model in a local area around a test example, exploiting the linear relationship of the covariates in the model. The second idea is to incorporate the well-known Kolmogorov-Smirnov bounds to construct sets of predicted cumulative hazard functions. As a result, a robust maximin strategy is used, which aims to minimize the average distance between the cumulative hazard functions of the explained black-box model and of the approximating Cox model, while maximizing that distance over all cumulative hazard functions in the band produced by the Kolmogorov-Smirnov bounds. The maximin optimization problem is reduced to a quadratic program. Various numerical experiments with synthetic and real datasets demonstrate the efficiency of SurvLIME-KS.
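
The sketch below only shows how a Kolmogorov-Smirnov (DKW-type) band is placed around an empirical distribution function; how SurvLIME-KS turns such bounds into sets of cumulative hazard functions and into the maximin quadratic program is described in the paper. The data and confidence level are assumptions.

# Sketch: Kolmogorov-Smirnov (DKW-type) confidence band around an empirical
# distribution function. Not the SurvLIME-KS algorithm itself.
import numpy as np

def ks_band(times, alpha=0.05):
    t = np.sort(np.asarray(times, dtype=float))
    n = t.size
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW half-width
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return t, lower, upper

rng = np.random.default_rng(0)
t, lo, hi = ks_band(rng.exponential(scale=10.0, size=30))
print(lo[:3], hi[:3])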


Adaptive Covariate Acquisition for Minimizing Total Cost of Classification

arXiv.org Machine Learning

In some applications, acquiring covariates comes at a cost that is not negligible. For example, in the medical domain, in order to classify whether a patient has diabetes or not, measuring glucose tolerance can be expensive. Assuming that the cost of each covariate and the cost of misclassification can be specified by the user, our goal is to minimize the (expected) total cost of classification, i.e. the cost of misclassification plus the cost of the acquired covariates. We formalize this optimization goal using the (conditional) Bayes risk and describe the optimal solution using a recursive procedure. Since the procedure is computationally infeasible, we introduce two assumptions: (1) the optimal classifier can be represented by a generalized additive model, and (2) the optimal sets of covariates are limited to a sequence of sets of increasing size. We show that under these two assumptions a computationally efficient solution exists. Furthermore, on several medical datasets, we show that the proposed method achieves the lowest total cost in most situations when compared to various previous methods. Finally, we weaken the requirement that the user specify all misclassification costs by allowing the user to specify only the minimally acceptable recall (target recall). Our experiments confirm that the proposed method achieves the target recall while minimizing the false discovery rate and the covariate acquisition costs better than previous methods.
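
A toy one-step look-ahead version of the total-cost trade-off, not the paper's recursive procedure or its GAM-based solution: classify now using the conditional Bayes risk, or pay to acquire one more covariate if the expected reduction in risk outweighs its cost. The costs and the hypothetical posterior distribution after acquisition are assumptions.

# Toy sketch of the total-cost trade-off (not the paper's procedure).
import numpy as np

C_FP, C_FN = 1.0, 10.0          # misclassification costs (assumed)

def bayes_risk(p_pos):
    """Expected misclassification cost of the optimal decision given P(y=1|x)."""
    return min(p_pos * C_FN,        # cost if we predict negative
               (1 - p_pos) * C_FP)  # cost if we predict positive

def expected_cost_after_acquisition(p_values, p_weights, covariate_cost):
    """Covariate cost plus the average Bayes risk over the posteriors that the
    new measurement could lead to (this predictive distribution is assumed known)."""
    return covariate_cost + float(np.dot(p_weights, [bayes_risk(p) for p in p_values]))

p_now = 0.3                      # current posterior from the cheap covariates
# Hypothetical: measuring glucose tolerance would move the posterior to 0.05
# or 0.8 with these probabilities, and costs 0.5.
cost_buy = expected_cost_after_acquisition([0.05, 0.8], [0.6, 0.4], covariate_cost=0.5)
cost_stop = bayes_risk(p_now)
print("acquire" if cost_buy < cost_stop else "classify now", cost_stop, cost_buy)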


Structural Information Learning Machinery: Learning from Observing, Associating, Optimizing, Decoding, and Abstracting

arXiv.org Artificial Intelligence

In the present paper, we propose the model of structural information learning machines (SiLeM for short), leading to a mathematical definition of learning by merging the theories of computation and information. Our model shows that the essence of learning is to gain information, that to gain information is to eliminate uncertainty embedded in a data space, and that eliminating the uncertainty of a data space can be reduced to an optimization problem, namely an information optimization problem, which can be solved by a general encoding tree method. The principle and criterion of structural information learning machines are, respectively, maximization of the decoding information from the observed data points together with the relationships among them, and semantical interpretation of the syntactical essential structure. A SiLeM machine learns the laws or rules of nature as follows. It observes the data points of the real world and builds the connections among the observed data to construct a data space, choosing the connections so that the decoding information of the data space is maximized. It then finds the encoding tree of the data space that minimizes the dynamical uncertainty of the data space; this encoding tree is referred to as a decoder, since it eliminates the maximum amount of uncertainty embedded in the data space. Next, it interprets the semantics of the decoder, an encoding tree, to form a knowledge tree. Finally, it extracts the remarkable common features, both semantical and syntactical, of the modules decoded by a decoder to construct trees of abstractions, providing the foundations for intuitive reasoning when new data are observed.
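
As a small concrete anchor for the information-theoretic quantities the abstract refers to, the sketch below computes one common formulation of the one-dimensional structural information of a graph, i.e. the entropy of the degree distribution normalised by the graph volume. The encoding-tree optimisation at the heart of SiLeM goes well beyond this and is not reproduced here; the toy graph is an assumption.

# Sketch: one-dimensional structural information of an undirected graph.
# The SiLeM encoding-tree machinery is far richer; this is illustration only.
import math
from collections import defaultdict

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]  # toy graph (assumed)

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
volume = 2 * len(edges)

h1 = -sum((d / volume) * math.log2(d / volume) for d in degree.values())
print(f"one-dimensional structural information: {h1:.3f} bits")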


MM for Penalized Estimation

arXiv.org Machine Learning

Penalized estimation can conduct variable selection and parameter estimation simultaneously. The general framework is to minimize a loss function subject to a penalty designed to generate sparse variable selection. Much of the previous work has focused on convex loss functions, including generalized linear models. When data are contaminated with noise, robust loss functions are typically introduced. Recent literature has witnessed a growing impact of nonconvex loss-based methods, which can provide robust estimation for data contaminated with outliers. This article investigates robust variable selection based on penalized nonconvex loss functions. We investigate properties of the local and global minimizers of the original penalized loss function and of the surrogate penalized loss function induced by the majorization-minimization (MM) algorithm used for numerical computation. We establish convergence theory of the proposed MM algorithm for penalized convex and nonconvex loss functions. Performance of the proposed algorithms for regression and classification problems is evaluated on simulated and real data, including healthcare costs and cancer clinical status. Efficient implementations of the algorithms are available in the R package mpath on CRAN.
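
A minimal MM sketch, not the mpath implementation: a lasso-penalised regression with a Cauchy-type nonconvex loss, where each majorisation step becomes a weighted lasso solved by rescaling rows. The loss, tuning constant c and penalty level are assumptions; intercepts are omitted for brevity.

# Minimal MM sketch (not mpath): lasso-penalised regression with a nonconvex
# Cauchy-type loss. Because the loss is concave in the squared residual, it is
# majorised at each step by a weighted quadratic, giving a weighted lasso.
import numpy as np
from sklearn.linear_model import Lasso

def mm_robust_lasso(X, y, alpha=0.1, c=2.385, iters=20):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / (1.0 + (r / c) ** 2)       # MM weights for the Cauchy-type loss
        sw = np.sqrt(w)
        model = Lasso(alpha=alpha, fit_intercept=False)
        model.fit(X * sw[:, None], y * sw)   # weighted lasso via row rescaling
        beta = model.coef_
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = np.zeros(10); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=0.5, size=200)
y[:10] += 15.0                               # gross outliers
print(np.round(mm_robust_lasso(X, y), 2))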


Integrative Generalized Convex Clustering Optimization and Feature Selection for Mixed Multi-View Data

arXiv.org Machine Learning

In mixed multi-view data, multiple sets of diverse features are measured on the same set of samples. By integrating all available data sources, we seek to discover common group structure among the samples that may be hidden in individual cluster analyses of a single data view. While several techniques for such integrative clustering have been explored, we propose and develop a convex formulation that inherits the strong statistical, mathematical and empirical properties of increasingly popular convex clustering methods. Specifically, our Integrative Generalized Convex Clustering Optimization (iGecco) method employs different convex distances, losses, or divergences for each of the different data views, with a joint convex fusion penalty that leads to common groups. Additionally, integrating mixed multi-view data is often challenging when each data source is high-dimensional. To perform feature selection in such scenarios, we develop an adaptive shifted group-lasso penalty that selects features by shrinking them towards their loss-specific centers. Our so-called iGecco+ approach selects the features from each data view that are best for determining the groups, often leading to improved integrative clustering. To fit our model, we develop a new type of generalized multi-block ADMM algorithm using sub-problem approximations that fits our model more efficiently for big data sets. Through a series of numerical experiments and real data examples on text mining and genomics, we show that iGecco+ achieves superior empirical performance for high-dimensional mixed multi-view data.
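
The sketch below only conveys the overall shape of an integrative convex-clustering objective: each data view gets its own loss on a shared set of centroids, and a joint fusion penalty pulls centroid rows together to form groups. It is not the iGecco/iGecco+ objective or solver; the losses, weights and the omitted feature-selection term are assumptions.

# Sketch of the shape of an integrative convex-clustering objective
# (not the iGecco solver; illustration only).
import numpy as np

def fusion_penalty(U, weights):
    total = 0.0
    n = U.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            total += weights[i, j] * np.linalg.norm(U[i] - U[j])
    return total

def integrative_objective(views, losses, U_views, weights, lam):
    """Per-view data-fit terms plus one joint fusion penalty on the
    concatenated centroids, so all views share a single grouping (assumed form)."""
    data_fit = sum(loss(X, U) for X, loss, U in zip(views, losses, U_views))
    U_joint = np.hstack(U_views)
    return data_fit + lam * fusion_penalty(U_joint, weights)

# Toy example: a Gaussian view (squared error) and a count view (Poisson-type loss).
sq_loss = lambda X, U: 0.5 * np.sum((X - U) ** 2)
pois_loss = lambda X, U: np.sum(np.exp(U) - X * U)
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(6, 3)), rng.poisson(3.0, size=(6, 2)).astype(float)
U1, U2 = X1.copy(), np.log(X2 + 1.0)
W = np.ones((6, 6))
print(integrative_objective([X1, X2], [sq_loss, pois_loss], [U1, U2], W, lam=0.5))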


Matrix Normal PCA for Interpretable Dimension Reduction and Graphical Noise Modeling

arXiv.org Machine Learning

Principal component analysis (PCA) is one of the most widely used dimension reduction and multivariate statistical techniques. From a probabilistic perspective, PCA seeks a low-dimensional representation of data in the presence of independent and identically distributed Gaussian noise. Probabilistic PCA (PPCA) and its variants have been extensively studied for decades, and most of them assume that the underlying noise is independent and identically distributed. However, noise in the real world is usually complicated and structured. Some non-linear variants of PPCA have been proposed to address this challenge, but those methods are generally difficult to interpret. To this end, we propose a powerful and intuitive PCA method (MN-PCA) that models the graphical noise with the matrix normal distribution, which enables us to explore the structure of the noise in both the feature space and the sample space. MN-PCA obtains a low-rank representation of the data and the structure of the noise simultaneously, and it can be explained as approximating the data under a generalized Mahalanobis distance. We develop two algorithms to solve this model: one maximizes the regularized likelihood, while the other exploits the Wasserstein distance and is more robust. Extensive experiments on various data demonstrate their effectiveness.
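
A small numpy sketch of the generative picture the abstract describes: a low-rank signal plus matrix-normal (Kronecker-structured) noise with a row covariance across samples and a column covariance across features, in contrast to the i.i.d. noise of standard PPCA. The specific covariance factors and dimensions are assumptions; the MN-PCA estimation algorithms are not reproduced here.

# Sketch of the MN-PCA generative picture: low-rank signal plus matrix-normal
# noise (illustration only, not the MN-PCA estimation algorithms).
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 20, 3

# Low-rank signal, as in probabilistic PCA.
Z = rng.normal(size=(n, k))
W = rng.normal(size=(k, p))
signal = Z @ W

# Matrix-normal noise: E = A G B.T with G iid standard normal has row
# covariance A A.T (across samples) and column covariance B B.T (across features).
A = np.linalg.cholesky(0.7 * np.eye(n) + 0.3 * np.ones((n, n)))
B = np.linalg.cholesky(np.eye(p) + 0.4 * np.diag(np.ones(p - 1), 1)
                       + 0.4 * np.diag(np.ones(p - 1), -1))
E = A @ rng.normal(size=(n, p)) @ B.T

X = signal + 0.5 * E
print(X.shape)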


ART: A machine learning Automated Recommendation Tool for synthetic biology

arXiv.org Machine Learning

Synthetic biology allows us to bioengineer cells to synthesize novel valuable molecules such as renewable biofuels or anticancer drugs. However, traditional synthetic biology approaches involve ad hoc, non-systematic engineering practices, which lead to long development times. Here, we present the Automated Recommendation Tool (ART), a tool that leverages machine learning and probabilistic modeling techniques to guide synthetic biology in a systematic fashion, without the need for a full mechanistic understanding of the biological system. Using sampling-based optimization, ART provides a set of recommended strains to be built in the next engineering cycle, alongside probabilistic predictions of their production levels. We demonstrate the capabilities of ART on simulated and real data sets and discuss possible difficulties in achieving satisfactory predictive power.
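
A generic sketch of the recommend-by-sampling idea, not the ART implementation (ART uses its own probabilistic ensemble model): fit a probabilistic surrogate on past design-to-production data, sample candidate designs, and rank them by an exploration-aware score. The designs, dimensions and the upper-confidence-bound score are assumptions.

# Generic recommend-by-sampling sketch (not the ART code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical past engineering cycle: 5-dimensional designs (e.g. enzyme
# expression levels) and measured production of the target molecule.
X_past = rng.uniform(0.0, 1.0, size=(30, 5))
y_past = X_past @ np.array([1.0, 0.5, -0.8, 0.2, 0.0]) + 0.1 * rng.normal(size=30)

surrogate = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
surrogate.fit(X_past, y_past)

# Sample candidate designs and score them; the UCB-style score trades off
# predicted production against predictive uncertainty.
candidates = rng.uniform(0.0, 1.0, size=(2000, 5))
mean, std = surrogate.predict(candidates, return_std=True)
score = mean + 1.0 * std
recommended = candidates[np.argsort(score)[-5:]]      # next strains to build
print(np.round(recommended, 2))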