
Adaptive Learning of Rank-One Models for Efficient Pairwise Sequence Alignment

arXiv.org Machine Learning

Pairwise alignment of DNA sequencing data is a ubiquitous task in bioinformatics and typically represents a heavy computational burden. State-of-the-art approaches to speed up this task use hashing to identify short segments (k-mers) that are shared by pairs of reads, which can then be used to estimate alignment scores. However, when the number of reads is large, accurately estimating alignment scores for all pairs is still very costly. Moreover, in practice, one is only interested in identifying pairs of reads with large alignment scores. In this work, we propose a new approach to pairwise alignment estimation based on two key new ingredients. The first is to cast the problem of pairwise alignment estimation within a general framework of rank-one crowdsourcing models, where the workers' responses correspond to k-mer hash collisions. These models can be accurately solved via a spectral decomposition of the response matrix. The second is to utilise a multi-armed bandit algorithm to adaptively refine this spectral estimator only for read pairs that are likely to have large alignment scores. The resulting algorithm iteratively performs a spectral decomposition of the response matrix for adaptively chosen subsets of the read pairs.
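Purely as an illustration of the two ingredients (a hedged sketch, not the authors' implementation): a rank-one spectral estimate of the collision matrix via power iteration, wrapped in a bandit-style loop that re-samples k-mer collisions only for the currently top-ranked pairs. The `sample_collisions` oracle, the round/budget parameters, and the top-fraction rule are all hypothetical.

```python
import numpy as np

def rank_one_spectral_estimate(R):
    """Rank-one spectral estimate of a symmetric response matrix R, where
    R[i, j] counts k-mer hash collisions between reads i and j."""
    A = R.copy()
    np.fill_diagonal(A, 0.0)                      # ignore self-collisions
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(50):                           # power iteration
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v
    return lam * np.outer(v, v)                   # E[R] ~ lam * v v^T

def adaptive_alignment_scores(sample_collisions, n_reads,
                              rounds=5, budget=10_000, top_frac=0.1):
    """Bandit-style refinement: after each spectral estimate, spend the
    sampling budget only on pairs currently believed to align strongly.
    `sample_collisions(i, j, n)` is a hypothetical oracle returning the
    number of hash collisions among n sampled k-mers for reads (i, j)."""
    R = np.zeros((n_reads, n_reads))
    all_pairs = [(i, j) for i in range(n_reads) for j in range(i + 1, n_reads)]
    active = list(all_pairs)
    for _ in range(rounds):
        per_pair = max(1, budget // len(active))
        for i, j in active:
            c = sample_collisions(i, j, per_pair)
            R[i, j] += c
            R[j, i] += c
        S = rank_one_spectral_estimate(R)
        active.sort(key=lambda p: -S[p])          # narrow focus to top pairs
        active = active[: max(1, int(len(all_pairs) * top_frac))]
    return S
```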


Three-stage intelligent support of clinical decision making for higher trust, validity, and explainability

arXiv.org Artificial Intelligence

This paper presents an approach for building consistent and applicable clinical decision support systems (CDSS) using data-driven predictive models, aimed at resolving the problem of low applicability and scalability of CDSS in real-world applications. The approach is based on the three-stage application of domain-specific and data-driven supportive procedures, designed to integrate into clinical business processes with higher trust and explainability of the prediction results and recommendations. Across the three stages, regulatory policy, data-driven models, and interpretation procedures are combined to enable natural domain-specific interaction with decision-makers, sequentially narrowing the focus of the intelligent decision support. The proposed methodology enables a higher level of automation, scalability, and semantic interpretability of CDSS. The approach was implemented in software and tested in a case study on T2DM prediction, where it improved on known clinical scales (such as FINDRISK) while keeping the problem-specific reasoning interface similar to existing applications. Such inheritance, together with the three-stage approach, makes the solution more compatible with existing practice and leads to trusted, valid, and explainable application of data-driven solutions in real-world cases.
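Read as a hedged sketch of the three-stage flow described above; the thresholds, feature names, scale items, and the scikit-learn-style linear model are hypothetical stand-ins for the actual domain-specific scale, predictive model, and interpretation procedure.

```python
import numpy as np

def stage1_clinical_scale(patient):
    """Stage 1: a rule-based, FINDRISK-like scale encoding regulatory
    policy; the items and point values here are illustrative only."""
    return (2 * (patient["age"] > 45)
            + 3 * (patient["bmi"] > 30)
            + 2 * patient["family_history"])

def stage2_risk(patient, model):
    """Stage 2: a data-driven model refines risk for patients flagged by
    stage 1 (assumes a scikit-learn-style probabilistic classifier)."""
    return model.predict_proba([patient["features"]])[0, 1]

def stage3_explanation(patient, model):
    """Stage 3: interpretation in domain terms; for a linear model,
    per-feature contributions coef_i * x_i are one simple choice."""
    contrib = model.coef_[0] * np.asarray(patient["features"])
    return dict(zip(patient["feature_names"], contrib))

def support_decision(patient, model, scale_cutoff=7, risk_cutoff=0.5):
    """Sequentially narrow the decision-support focus across the stages."""
    if stage1_clinical_scale(patient) < scale_cutoff:
        return {"decision": "low risk by clinical scale"}
    risk = stage2_risk(patient, model)
    if risk < risk_cutoff:
        return {"decision": "monitor", "risk": risk}
    return {"decision": "refer", "risk": risk,
            "explanation": stage3_explanation(patient, model)}
```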


A unified survey on treatment effect heterogeneity modeling and uplift modeling

arXiv.org Machine Learning

A central question in many fields of scientific research is to determine how an outcome would be affected by an action, or to measure the effect of an action (a.k.a. the treatment effect). In recent years, a need for estimating heterogeneous treatment effects conditional on the different characteristics of individuals has emerged in research fields such as personalized healthcare, social science, and online marketing. To meet this need, researchers and practitioners from different communities have developed algorithms under the treatment effect heterogeneity modeling approach and the uplift modeling approach, respectively. In this paper, we provide a unified survey of these two seemingly disconnected yet closely related approaches under the potential outcome framework. We give a structured overview of existing methods, emphasizing their inherent connections and using a set of unified notations to make comparisons across methods easy. We then review the main applications of the surveyed methods in personalized marketing, personalized medicine, and social studies. Finally, we summarize the existing software packages, discuss the use of these methods on synthetic, semi-synthetic, and real-world data sets, and provide some general guidelines for choosing among them.
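As a concrete instance of one meta-learner family that surveys of this kind cover (a minimal sketch, not a method specific to this paper): the T-learner fits one outcome model per treatment arm and takes the difference of their predictions as the estimated conditional average treatment effect. The model choice and the synthetic data below are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, y, t):
    """T-learner under the potential outcome framework.
    X: covariates, y: outcomes, t: binary treatment indicator."""
    model_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    model_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    # tau(x) = E[Y | X=x, T=1] - E[Y | X=x, T=0]
    return model_treated.predict(X) - model_control.predict(X)

# Usage on synthetic data: the true effect is 2 * x0, and the estimator
# should recover it approximately.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))
t = rng.integers(0, 2, 2000)
y = X[:, 0] + 2 * X[:, 0] * t + 0.1 * rng.standard_normal(2000)
tau_hat = t_learner_cate(X, y, t)
```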


Doubly-Stochastic Normalization of the Gaussian Kernel is Robust to Heteroskedastic Noise

arXiv.org Machine Learning

A fundamental step in many data-analysis techniques is the construction of an affinity matrix describing similarities between data points. When the data points reside in Euclidean space, a widespread approach is to form an affinity matrix from the Gaussian kernel applied to pairwise distances, followed by a certain normalization (e.g. the row-stochastic normalization or its symmetric variant). We demonstrate that the doubly-stochastic normalization of the Gaussian kernel with zero main diagonal (i.e. no self-loops) is robust to heteroskedastic noise. That is, the doubly-stochastic normalization is advantageous in that it automatically accounts for observations with different noise variances. Specifically, we prove that in a suitable high-dimensional setting where heteroskedastic noise does not concentrate too much in any particular direction in space, the resulting (doubly-stochastic) noisy affinity matrix converges to its clean counterpart with rate $m^{-1/2}$, where $m$ is the ambient dimension. We demonstrate this result numerically, and show that, in contrast, the popular row-stochastic and symmetric normalizations behave unfavorably under heteroskedastic noise. Furthermore, we provide a prototypical example of simulated single-cell RNA sequence data with strong intrinsic heteroskedasticity, where the advantage of the doubly-stochastic normalization for exploratory analysis is evident.
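A minimal sketch of the normalization under discussion, assuming a symmetric Sinkhorn-style iteration (the bandwidth, damping scheme, and tolerance are illustrative choices, not the paper's exact procedure): find a positive diagonal scaling d such that W = diag(d) K diag(d) has unit row and column sums, with the kernel's main diagonal zeroed.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def doubly_stochastic_gaussian(X, bandwidth, n_iter=1000, tol=1e-10):
    """Doubly-stochastic normalization of a zero-diagonal Gaussian kernel:
    solve d_i * (K d)_i = 1 for all i, so W = D K D is doubly stochastic."""
    K = np.exp(-squareform(pdist(X, "sqeuclidean")) / (2 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)                      # no self-loops
    d = np.ones(K.shape[0])
    for _ in range(n_iter):
        Kd = K @ d
        if np.max(np.abs(d * Kd - 1.0)) < tol:    # row sums of W are d * (K d)
            break
        d = np.sqrt(d / Kd)                       # damped symmetric Sinkhorn step
    return d[:, None] * K * d[None, :]

# Rows and columns of the result both sum to (approximately) one:
W = doubly_stochastic_gaussian(
    np.random.default_rng(0).normal(size=(200, 10)), bandwidth=3.0)
print(W.sum(axis=0)[:3], W.sum(axis=1)[:3])
```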


Patient Similarity Analysis with Longitudinal Health Data

arXiv.org Machine Learning

Healthcare professionals have long envisioned using the enormous processing power of computers to discover new facts and medical knowledge locked inside electronic health records. These vast medical archives contain time-resolved information about medical visits, tests and procedures, as well as outcomes, which together form individual patient journeys. By assessing the similarities among these journeys, it is possible to uncover clusters of common disease trajectories with shared health outcomes. The assignment of patient journeys to specific clusters may in turn serve as the basis for personalized outcome prediction and treatment selection. This is a non-trivial computational problem, as it requires comparing patient data with multi-dimensional and multi-modal features that are captured at different times and resolutions. In this review, we provide a comprehensive overview of the tools and methods used in patient similarity analysis with longitudinal data and discuss their potential for improving clinical decision making.
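The review itself covers many measures; purely as an illustration of comparing journeys sampled at different times and rates, here is a minimal dynamic time warping (DTW) distance for one-dimensional measurement sequences. DTW is one common choice for this kind of comparison, not a method prescribed by the review.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two 1-D measurement sequences,
    e.g. lab values from two patient journeys sampled at different rates."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment path ending at (i, j): match, insert, or delete.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two "journeys" with the same shape but different sampling resolution
# still come out as similar under DTW.
p1 = np.array([5.0, 5.5, 7.0, 9.0, 8.0])
p2 = np.array([5.0, 5.2, 5.6, 7.1, 8.9, 8.9, 8.1])
print(dtw_distance(p1, p2))
```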


Precision Medicine Informatics: Principles, Prospects, and Challenges

arXiv.org Artificial Intelligence

Precision Medicine (PM) is an emerging approach that promises to change the existing paradigm of medical practice. Recent advances in technological innovation and genetics, together with the growing availability of health data, have set a new pace for research and impose a set of new requirements on different stakeholders. To date, several studies discuss different aspects of PM. Nevertheless, a holistic representation of those aspects from a technological perspective, in relation to applications and challenges, is mostly missing. In this context, this paper surveys advances in PM from an informatics viewpoint and reviews the enabling tools and techniques in a categorized manner. In addition, the study discusses how other technological paradigms, including big data, artificial intelligence, and the Internet of Things, can be exploited to advance the potential of PM. Furthermore, the paper provides some guidelines for future research toward seamless implementation and wide-scale deployment of PM, based on the identified open issues and associated challenges. To this end, the paper proposes an integrated holistic framework for PM, motivating informatics researchers to design their research in an appropriate context.


Inverse Projection Representation and Category Contribution Rate for Robust Tumor Recognition

arXiv.org Machine Learning

Sparse representation-based classification (SRC) methods have achieved remarkable results. SRC, however, still suffers from requiring enough training samples, making insufficient use of test samples, and instability of representation. In this paper, a stable inverse projection representation-based classification (IPRC) is presented to tackle these problems by effectively using the test samples. An inverse projection representation (IPR) is first proposed, and its feasibility and stability are analyzed. A classification criterion named the category contribution rate is constructed to match the IPR and complete the classification. Moreover, a statistical measure is introduced to quantify the stability of representation-based classification methods. Based on the IPRC technique, a robust tumor recognition framework is presented for interpreting microarray gene expression data, in which a two-stage hybrid gene selection method is introduced to select informative genes. Finally, a functional analysis of candidate pathogenicity-related genes is given. Extensive experiments on six public tumor microarray gene expression datasets demonstrate that the proposed technique is competitive with state-of-the-art methods.
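For context, a hedged sketch of the classic SRC baseline the abstract contrasts against (the IPR itself is this paper's contribution and is not reproduced here): code the test sample over the training dictionary, then assign the class whose samples yield the smallest reconstruction residual. The lasso solver and its penalty are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X_train, y_train, x_test, alpha=0.01):
    """SRC baseline: represent x_test as a sparse combination of training
    samples (columns of the dictionary), then pick the class whose samples
    explain it with the smallest residual."""
    # Dictionary columns are training samples, so solve
    # x_test ~ X_train.T @ coef with an l1 penalty on coef.
    coef = Lasso(alpha=alpha, max_iter=10_000).fit(X_train.T, x_test).coef_
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        recon = X_train[mask].T @ coef[mask]      # class-restricted reconstruction
        residuals[c] = np.linalg.norm(x_test - recon)
    return min(residuals, key=residuals.get)
```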


Are Clusterings of Multiple Data Views Independent?

arXiv.org Machine Learning

In the Pioneer 100 (P100) Wellness Project (Price and others, 2017), multiple types of data are collected on a single set of healthy participants at multiple timepoints in order to characterize and optimize wellness. One way to do this is to identify clusters, or subgroups, among the participants, and then to tailor personalized health recommendations to each subgroup. It is tempting to cluster the participants using all of the data types and timepoints, in order to fully exploit the available information. However, clustering the participants based on multiple data views implicitly assumes that a single underlying clustering of the participants is shared across all data views. If this assumption does not hold, then clustering the participants using multiple data views may lead to spurious results. In this paper, we seek to evaluate the assumption that there is some underlying relationship among the clusterings from the different data views, by asking the question: are the clusterings of the different data views dependent or independent? We develop a new test for answering this question, which we then apply to clinical, proteomic, and metabolomic data, across two distinct timepoints, from the P100 study. We find that while the subgroups of the participants defined with respect to any single data type seem to be dependent across time, the clustering among the participants based on one data type (e.g. proteomic data) appears not to be associated with the clustering based on another data type (e.g. clinical data).
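The paper develops a dedicated test; as a generic stand-in that makes the question concrete (a hedged sketch, not the authors' method), one can permutation-test the agreement between two views' cluster labels using the adjusted Rand index.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def clustering_independence_pvalue(labels_a, labels_b, n_perm=10_000, seed=0):
    """Permutation test of the null that two views' clusterings are
    independent, with the adjusted Rand index (ARI) as the test statistic."""
    rng = np.random.default_rng(seed)
    observed = adjusted_rand_score(labels_a, labels_b)
    null = [adjusted_rand_score(labels_a, rng.permutation(labels_b))
            for _ in range(n_perm)]
    # One-sided p-value: how often does a random pairing agree this well?
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)
```

A small p-value indicates the two clusterings agree more than chance would allow, i.e. the views share clustering structure; a large p-value is consistent with independence.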


Future Analytics Ecosystem Principles

#artificialintelligence

A little over a decade ago I had the great pleasure of hearing the commencement speech at my son's graduation. Eric Lander, the leader of the Human Genome Project, described his journey through science and life. He shared that he did not have a clear direction as a mathematics major at Princeton, and described the labyrinth of decisions that followed to get him into genetic biology. He also spoke about the process of science and its adoption into the culture and economy of the modern-day world. His punchline, something my father had once told me as well, was that it takes a generation to understand and incorporate scientific discoveries into the economy and culture.


MSc Dissertation: Exclusive Row Biclustering for Gene Expression Using a Combinatorial Auction Approach

arXiv.org Machine Learning

The availability of large microarray data sets has led to a growing interest in biclustering methods over the past decade. Several algorithms have been proposed to identify subsets of genes and conditions according to different similarity measures and under varying constraints. In this paper we focus on the exclusive row biclustering problem for gene expression data sets, in which each row can be a member of only a single bicluster while columns can participate in multiple ones. This type of biclustering may be adequate, for example, for clustering groups of cancer patients, where each patient (row) is expected to carry only a single type of cancer, while each cancer type is associated with multiple (and possibly overlapping) genes (columns). We present a novel method to identify these exclusive row biclusters through a combination of existing biclustering algorithms and combinatorial auction techniques. We devise an approach for tuning the threshold of our algorithm based on comparison to a null model, in the spirit of the Gap statistic approach; a sketch of this idea follows below. We demonstrate our approach on both synthetic and real-world gene expression data and show its power in identifying large-span, non-overlapping row submatrices while accounting for their unique nature. The Gap statistic approach succeeds in identifying appropriate thresholds in all our examples.
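A hedged sketch of the null-model threshold tuning mentioned above, with a placeholder score function and a column-permutation null standing in for the dissertation's exact choices: pick the threshold whose observed score most exceeds its average under the null.

```python
import numpy as np

def gap_style_threshold(data, score_fn, thresholds, n_null=20, seed=0):
    """Gap-statistic-style tuning: choose the threshold whose score on the
    real matrix most exceeds its expectation under a null model obtained by
    independently permuting each column (breaking joint row/column bicluster
    structure while keeping marginals). `score_fn(matrix, t)` is a
    hypothetical quality score for the biclustering found at threshold t."""
    rng = np.random.default_rng(seed)
    gaps = []
    for t in thresholds:
        real = score_fn(data, t)
        null_scores = [score_fn(np.apply_along_axis(rng.permutation, 0, data), t)
                       for _ in range(n_null)]
        gaps.append(real - np.mean(null_scores))
    return thresholds[int(np.argmax(gaps))]
```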