Mixture modeling on related samples by $\psi$-stick breaking and kernel perturbation

arXiv.org Machine Learning

There has been great interest recently in applying nonparametric kernel mixtures in a hierarchical manner to model multiple related data samples jointly. In such settings several data features are commonly present: (i) the related samples often share some, if not all, of the mixture components but with differing weights, (ii) only some, not all, of the mixture components vary across the samples, and (iii) often the shared mixture components across samples are not aligned perfectly in terms of their location and spread, but rather display small misalignments either due to systematic cross-sample differences or, more often, due to uncontrolled, extraneous causes. Properly incorporating these features in mixture modeling will enhance the efficiency of inference, whereas ignoring them not only reduces efficiency but can jeopardize the validity of the inference due to issues such as confounding. We introduce two techniques for incorporating these features in modeling related data samples using kernel mixtures. The first technique, called $\psi$-stick breaking, is a joint generative process for the mixing weights through the breaking of both a stick shared by all the samples for the components that do not vary in size across samples and an idiosyncratic stick for each sample for those components that do vary in size. The second technique is to imbue random perturbation into the kernels, thereby accounting for cross-sample misalignment. These techniques can be used either separately or together in both parametric and nonparametric kernel mixtures. We derive efficient Bayesian inference recipes based on MCMC sampling for models featuring these techniques, and illustrate how they work on both simulated data and a real flow cytometry data set, in prediction/estimation, cross-sample calibration, and testing for multi-sample differences.
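
The abstract describes the $\psi$-stick breaking construction only at a high level. A minimal sketch of the underlying idea, in which each break is drawn either from a single stick shared across samples or from a sample-specific stick, is given below; the per-break Bernoulli choice with probability psi, the Beta(1, alpha) breaks, and the function names are illustrative assumptions, not the authors' exact construction.

    import numpy as np

    rng = np.random.default_rng(0)

    def psi_stick_weights(n_samples, k, alpha, psi, rng):
        """Illustrative psi-stick breaking: each of the K breaks is taken either
        from a single stick shared by all samples (probability psi) or from each
        sample's own idiosyncratic stick (probability 1 - psi)."""
        shared = rng.beta(1.0, alpha, size=k)              # one stick shared by all samples
        idio = rng.beta(1.0, alpha, size=(n_samples, k))   # one stick per sample
        use_shared = rng.random(k) < psi                   # breaks that do not vary across samples
        fracs = np.where(use_shared, shared, idio)         # (n_samples, k) break fractions
        remaining = np.cumprod(np.hstack([np.ones((n_samples, 1)),
                                          1.0 - fracs[:, :-1]]), axis=1)
        return fracs * remaining                           # per-sample mixing weights

    weights = psi_stick_weights(n_samples=3, k=10, alpha=2.0, psi=0.7, rng=rng)
    print(weights.round(3))      # rows: samples, columns: mixture components
    print(weights.sum(axis=1))   # close to 1; leftover mass stays on the stick

Components drawn from the shared stick receive identical weights in every sample, while the remaining components vary in size across samples, which is the qualitative behaviour the abstract describes.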


Sequential Embedding Induced Text Clustering, a Non-parametric Bayesian Approach

arXiv.org Machine Learning

Current state-of-the-art nonparametric Bayesian text clustering methods model documents through a multinomial distribution over bags of words. Although these methods can effectively exploit the word-burstiness representation of documents and achieve decent performance, they do not exploit the sequential information in text or the relationships among synonyms. In this paper, documents are modeled jointly through bags of words, sequential features, and word embeddings. We propose the Sequential Embedding induced Dirichlet Process Mixture Model (SiDPMM) to exploit this joint document representation for text clustering. The sequential features are extracted by an encoder-decoder component. Word embeddings produced by the continuous-bag-of-words (CBOW) model are introduced to handle synonyms. Experimental results demonstrate the benefits of our model in two major aspects: 1) improved performance across multiple diverse text datasets in terms of normalized mutual information (NMI); 2) more accurate inference of the ground-truth number of clusters, thanks to a regularization effect on tiny outlier clusters.
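
As a rough stand-in for the pipeline described above, the sketch below clusters mean CBOW embeddings of documents under a truncated Dirichlet-process mixture. gensim's Word2Vec and scikit-learn's BayesianGaussianMixture are substitutes chosen for illustration, not the paper's SiDPMM inference, and the encoder-decoder sequential features are omitted entirely; the toy documents are made up.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.mixture import BayesianGaussianMixture

    docs = [["cheap", "flights", "to", "paris"],
            ["book", "airline", "tickets", "online"],
            ["flight", "deals", "and", "hotels"],
            ["best", "pasta", "recipe", "ever"],
            ["how", "to", "cook", "spaghetti"],
            ["easy", "tomato", "sauce", "recipe"]]

    # CBOW word embeddings (sg=0 selects CBOW in gensim).
    w2v = Word2Vec(docs, vector_size=16, window=2, min_count=1, sg=0, seed=1)

    # Represent each document by the mean of its word vectors.
    X = np.array([np.mean([w2v.wv[w] for w in d], axis=0) for d in docs])

    # Truncated Dirichlet-process mixture: unused components get near-zero weight,
    # mimicking the regularization of tiny outlier clusters mentioned above.
    dpmm = BayesianGaussianMixture(
        n_components=5,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)

    print(dpmm.predict(X))            # cluster label per document
    print(dpmm.weights_.round(3))     # most component weights collapse toward zero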


Flow Cytometry Data Analysis: Comparing Large Multivariate Data Sets Using Classification Trees

AAAI Conferences

This paper describes a method to compare flow cytometry data sets, which typically contain 50,000 six-parameter measurements each. By this method, the data points in two such data sets are divided into subpopulations using a binary classification tree generated from the data. The χ² test is then used to establish the homogeneity of the two data sets based on how their data are distributed across these subpopulations. Preliminary results indicate that this comparison method is sufficiently sensitive to detect differences between flow cytometry data sets that are too subtle for human investigators to notice.
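
An illustrative reconstruction of this comparison scheme, using simulated data and an arbitrary tree depth rather than anything from the paper, might look like the following: pool the two data sets, grow a binary classification tree on the pooled points, treat its leaves as subpopulations, and run a χ² test on the leaf-by-data-set contingency table.

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(50_000, 6))   # data set A: 6-parameter events
    b = rng.normal(0.1, 1.0, size=(50_000, 6))   # data set B: slightly shifted

    X = np.vstack([a, b])
    y = np.repeat([0, 1], [len(a), len(b)])      # which data set each event came from

    # Binary tree generated from the pooled data; its leaves act as subpopulations.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    leaves = tree.apply(X)                       # leaf index for every event

    # Contingency table: data set x subpopulation (leaf).
    leaf_ids = np.unique(leaves)
    table = np.array([[np.sum((leaves == leaf) & (y == s)) for leaf in leaf_ids]
                      for s in (0, 1)])

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")   # small p suggests the data sets differ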


Time-Varying Clusters in Large-Scale Flow Cytometry

AAAI Conferences

Flow cytometers measure the optical properties of particles to classify microbes. Recent innovations have allowed oceanographers to collect flow cytometry data continuously during research cruises, leading to an explosion of data and new challenges for the classification task. The massive scale, time-varying underlying populations, and noisy measurements motivate the development of new classification methods. We describe the problem, the data, and some preliminary results demonstrating the difficulty with conventional methods.


Mondrian Processes for Flow Cytometry Analysis

arXiv.org Machine Learning

Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions. Current clinical workflows rely on a manual process called gating to classify cells into their canonical types. This dependence on human annotation limits the rate, reproducibility, and complexity of flow cytometry analysis. In this paper, we propose using Mondrian processes to perform automated gating by incorporating prior information of the kind used by gating technicians. The method segments cells into types via Bayesian nonparametric trees. Examining the posterior over trees allows for interpretable visualizations and uncertainty quantification - two vital qualities for implementation in clinical practice.
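
For readers unfamiliar with the prior, the sketch below samples a partition from a Mondrian process on an axis-aligned box. It shows only the generative recursion, not the gating-specific priors or posterior inference used in the paper; the budget, box bounds, and function name are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_mondrian(low, high, budget, t=0.0):
        """Recursively sample axis-aligned cuts of the box [low, high].

        Leaves hold a box; internal nodes hold the cut dimension, the cut
        location, and the two sub-trees."""
        low, high = np.asarray(low, float), np.asarray(high, float)
        lengths = high - low
        # Time to the next cut is exponential with rate equal to the box's
        # linear dimension (sum of side lengths).
        t_cut = t + rng.exponential(1.0 / lengths.sum())
        if t_cut > budget:                        # budget exhausted: this box is a leaf
            return {"low": low, "high": high}
        d = rng.choice(len(lengths), p=lengths / lengths.sum())  # dimension chosen proportional to side length
        x = rng.uniform(low[d], high[d])                         # uniform cut location within that side
        left_high, right_low = high.copy(), low.copy()
        left_high[d], right_low[d] = x, x
        return {"dim": int(d), "cut": float(x),
                "left": sample_mondrian(low, left_high, budget, t_cut),
                "right": sample_mondrian(right_low, high, budget, t_cut)}

    tree = sample_mondrian(low=[0.0, 0.0], high=[1.0, 1.0], budget=3.0)

Each draw yields a random hierarchical partition of the gating space; larger budgets produce deeper trees and finer cell-type segmentations.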