Indian Buffet Process



Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

This paper defines a joint generative model of an image and its annotated text which is used to learn a bit-vector representation for large-scale image retrieval. An Indian Buffet Process is used to learn the length of the bit vector. The method compares favourably to several widely used techniques.

Quality
=======
It is good to see a retrieval paper constructed around a well-defined probabilistic model.
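
For reference, a minimal sketch of the Indian Buffet Process generative scheme (the "restaurant" construction) that underlies this and several of the entries below is given here; the concentration parameter alpha and the number of customers are illustrative placeholders, not values taken from the reviewed paper.

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from the Indian Buffet Process.

    Each row is a 'customer'; each column is a 'dish' (latent feature).
    The number of columns is not fixed in advance: customer i takes each
    previously used dish k with probability m_k / i (m_k = number of earlier
    customers who took dish k), then tries Poisson(alpha / i) new dishes.
    """
    rng = np.random.default_rng(rng)
    dishes = []  # dishes[k] = list of customer indices that took dish k

    for i in range(1, n_customers + 1):
        # Take existing dishes in proportion to their popularity so far.
        for takers in dishes:
            if rng.random() < len(takers) / i:
                takers.append(i - 1)
        # Try a Poisson number of brand-new dishes.
        for _ in range(rng.poisson(alpha / i)):
            dishes.append([i - 1])

    # Assemble the binary matrix (rows = customers, columns = dishes).
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[takers, k] = 1
    return Z

Z = sample_ibp(n_customers=10, alpha=2.0, rng=0)
print(Z.shape, Z.sum(axis=1))  # per-customer feature counts
```

In the retrieval model reviewed above, the number of columns that end up active in such a matrix is what determines the learned bit-vector length.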


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

Spectral methods are based on decomposing moment tensors. If data are generated from latent variable models, their empirical moment tensors have a kind of (approximate) low-rank decomposition (approximate due to both noisy observations and finite-sample estimation error in the empirical moments). These decompositions can be computed from the moment tensors, e.g. using a kind of power iteration method on related (symmetrized) tensors. The basic ideas of these spectral methods are fairly well established and many examples have been explored, especially in [6].

This paper applies spectral fitting methods to data modeled as being generated from an IBP, including two common emission models (a linear Gaussian model and a sparse factor analysis model, described in Section 2). The main ingredients are a calculation of the appropriate moment tensors and corresponding symmetrized versions (Section 3), an application of the standard tensor power decomposition method (Section 4), and concentration proofs that offer recovery guarantees when data are generated from the model (Section 5).
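
The "power iteration method on related (symmetrized) tensors" mentioned in the review can be illustrated with a minimal sketch of the standard tensor power decomposition: contract a symmetric third-order tensor against the current estimate, normalise, and deflate after each recovered component. This is a generic illustration under the assumption of an (approximately) orthogonally decomposable input tensor, not the paper's exact algorithm; the toy tensor at the end is fabricated purely for the check.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iter=200, rng=None):
    """Recover rank-1 components of a symmetric 3rd-order tensor T.

    Repeats v <- T(I, v, v) / ||T(I, v, v)||, then deflates
    T <- T - lambda * (v outer v outer v) for each recovered pair.
    """
    rng = np.random.default_rng(rng)
    T = T.copy()
    d = T.shape[0]
    eigvals, eigvecs = [], []

    for _ in range(n_components):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            # Contraction T(I, v, v): sum_{j,k} T[i, j, k] v[j] v[k]
            w = np.einsum('ijk,j,k->i', T, v, v)
            v = w / np.linalg.norm(w)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        eigvals.append(lam)
        eigvecs.append(v)
        # Deflate the recovered rank-1 component before the next pass.
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)

    return np.array(eigvals), np.array(eigvecs)

# Toy check: build a symmetric tensor from two known orthogonal components.
d = 5
u1, u2 = np.eye(d)[0], np.eye(d)[1]
T = 3.0 * np.einsum('i,j,k->ijk', u1, u1, u1) + 1.5 * np.einsum('i,j,k->ijk', u2, u2, u2)
vals, vecs = tensor_power_method(T, n_components=2, rng=0)
print(vals)  # recovers both components, approximately 3.0 and 1.5 (order may vary)
```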


The Kernel Beta Process

Lu Ren, Yingjian Wang, Lawrence Carin, David B. Dunson

Neural Information Processing Systems

A new Lévy process prior is proposed for an uncountable collection of covariate-dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample ("customer"), and latent covariates learned for each feature ("dish"). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to "consider" a dish, based on the distance in covariate space between the customer and dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
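
A minimal sketch of the "consider, then select" step described above, assuming a truncated dish set, an RBF kernel for the consider probability, and beta-distributed dish probabilities; these modelling choices are illustrative stand-ins rather than the paper's exact specification.

```python
import numpy as np

def kbp_draw(x_customer, x_dishes, pi_dishes, length_scale=1.0, rng=None):
    """Draw one customer's binary dish selections, KBP-style.

    x_customer : (d,) covariates of the customer
    x_dishes   : (K, d) latent covariates of K (truncated) dishes
    pi_dishes  : (K,) beta-process-style selection probabilities

    The customer first 'considers' dish k with a probability that decays
    with covariate distance (an RBF kernel here, as an illustrative choice),
    and only a considered dish can then be selected with probability pi_k.
    """
    rng = np.random.default_rng(rng)
    dist2 = np.sum((x_dishes - x_customer) ** 2, axis=1)
    consider_prob = np.exp(-dist2 / (2.0 * length_scale ** 2))
    considered = rng.random(len(pi_dishes)) < consider_prob
    selected = considered & (rng.random(len(pi_dishes)) < pi_dishes)
    return selected.astype(int)

rng = np.random.default_rng(0)
K, d = 8, 2
x_dishes = rng.normal(size=(K, d))
pi_dishes = rng.beta(1.0, 3.0, size=K)
z = kbp_draw(np.zeros(d), x_dishes, pi_dishes, rng=1)
print(z)
```

Letting the kernel width grow so that every dish is always considered reduces the draw to ordinary beta-process selection, mirroring the limiting case noted in the abstract.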


Spectral Methods for Indian Buffet Process Inference

Hsiao-Yu Tung, Alexander J. Smola

Neural Information Processing Systems

The Indian Buffet Process is a versatile statistical tool for modeling distributions over binary matrices. We provide an efficient spectral algorithm as an alternative to costly Variational Bayes and sampling-based algorithms. We derive a novel tensorial characterization of the moments of the Indian Buffet Process itself and of two of its applications. We give a computationally efficient iterative inference algorithm, concentration of measure bounds, and reconstruction guarantees. Our algorithm provides superior accuracy and cheaper computation than comparable Variational Bayesian approaches on a number of reference problems.
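
The moment tensors such a spectral algorithm operates on can be sketched directly from data; the snippet below computes only raw empirical second- and third-order moments, whereas the paper works with bias-corrected, symmetrized versions of these quantities.

```python
import numpy as np

def empirical_moments(X):
    """Empirical 2nd- and 3rd-order moment tensors of data rows X (n, d).

    M2[i, j]    = E[x_i x_j]
    M3[i, j, k] = E[x_i x_j x_k]
    Spectral IBP inference operates on corrected, symmetrized versions of
    such tensors; this sketch shows only the raw sample estimates.
    """
    n = X.shape[0]
    M2 = X.T @ X / n
    M3 = np.einsum('ni,nj,nk->ijk', X, X, X) / n
    return M2, M3

X = np.random.default_rng(0).normal(size=(1000, 4))
M2, M3 = empirical_moments(X)
print(M2.shape, M3.shape)  # (4, 4) and (4, 4, 4)
```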


Bayesian Nonparametrics: An Alternative to Deep Learning

Bahman Moraffah

arXiv.org Machine Learning

Bayesian nonparametric models offer a flexible and powerful framework for statistical model selection, enabling the adaptation of model complexity to the intricacies of diverse datasets. This survey intends to delve into the significance of Bayesian nonparametrics, particularly in addressing complex challenges across various domains such as statistics, computer science, and electrical engineering. By elucidating the basic properties and theoretical foundations of these nonparametric models, this survey aims to provide a comprehensive understanding of Bayesian nonparametrics and their relevance in addressing complex problems, particularly in the domain of multi-object tracking. Through this exploration, we uncover the versatility and efficacy of Bayesian nonparametric methodologies, paving the way for innovative solutions to intricate challenges across diverse disciplines.


Inferring Interaction Networks using the IBP applied to microRNA Target Prediction

Neural Information Processing Systems

Determining interactions between entities and the overall organization and clustering of nodes in networks is a major challenge when analyzing biological and social network data. Here we extend the Indian Buffet Process (IBP), a nonparametric Bayesian model, to integrate noisy interaction scores with properties of individual entities for inferring interaction networks and clustering nodes within these networks. We present an application of this method to study how microRNAs regulate mRNAs in cells. Analysis of synthetic and real data indicates that the method improves upon prior methods, correctly recovers interactions and clusters, and provides accurate biological predictions.


Bayesian Nonparametric Modeling of Suicide Attempts

Neural Information Processing Systems

The National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) database contains a large amount of information regarding the way of life, medical conditions, etc., of a representative sample of the U.S. population. In this paper, we are interested in seeking the hidden causes behind suicide attempts, for which we propose to model the subjects using a nonparametric latent model based on the Indian Buffet Process (IBP). Due to the nature of the data, we need to adapt the observation model for discrete random variables. We propose a generative model in which the observations are drawn from a multinomial-logit distribution given the IBP matrix. The implementation of an efficient Gibbs sampler is accomplished using the Laplace approximation, which allows integrating out the weighting factors of the multinomial-logit likelihood model. Finally, the experiments over the NESARC database show that our model properly captures some of the hidden causes underlying suicide attempts.
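
A minimal sketch of the multinomial-logit observation model described above: given one row z of the IBP matrix and a weight matrix per observed variable, each discrete answer is drawn from a softmax over that variable's categories. The shapes and random weights are illustrative assumptions; the Laplace-approximation Gibbs sampler itself is not reproduced here.

```python
import numpy as np

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

def sample_observation(z, weights, rng=None):
    """Draw one subject's discrete answers given latent features z.

    z       : (K,) binary feature vector (one row of the IBP matrix)
    weights : list of (K, C_d) weight matrices, one per observed variable,
              where C_d is that variable's number of categories.
    Each answer d is multinomial with probabilities softmax(z @ weights[d]).
    """
    rng = np.random.default_rng(rng)
    answers = []
    for W_d in weights:
        p = softmax(z @ W_d)
        answers.append(rng.choice(len(p), p=p))
    return np.array(answers)

rng = np.random.default_rng(0)
K = 3  # illustrative number of latent features
weights = [rng.normal(size=(K, 4)), rng.normal(size=(K, 2)), rng.normal(size=(K, 5))]
z = np.array([1, 0, 1])
print(sample_observation(z, weights, rng=1))
```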


Restricting exchangeable nonparametric distributions

Neural Information Processing Systems

Distributions over matrices with exchangeable rows and infinitely many columns are useful in constructing nonparametric latent variable models. However, the distribution implied by such models over the number of features exhibited by each data point may be poorly suited for many modeling tasks. In this paper, we propose a class of exchangeable nonparametric priors obtained by restricting the domain of existing models. Such models allow us to specify the distribution over the number of features per data point, and can achieve better performance on data sets where the number of features is not well-modeled by the original distribution.
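
One crude way to picture "restricting the domain" is rejection sampling: draw feature rows from an unrestricted prior and keep only those whose number of active features lies in an allowed set. The sketch below uses independent per-feature inclusion probabilities from a truncated approximation as a stand-in for the full exchangeable construction; it is illustrative only and is not the construction or inference scheme proposed in the paper.

```python
import numpy as np

def sample_restricted_rows(pi, n_rows, allowed_counts, rng=None):
    """Sample binary feature rows, restricted to an allowed set of row sums.

    pi             : (K,) per-feature inclusion probabilities (truncated prior)
    allowed_counts : set of permitted numbers of active features per row

    Rows drawn from the unrestricted prior are rejected until their number
    of active features lies in `allowed_counts`.
    """
    rng = np.random.default_rng(rng)
    rows = []
    while len(rows) < n_rows:
        z = (rng.random(len(pi)) < pi).astype(int)
        if z.sum() in allowed_counts:
            rows.append(z)
    return np.array(rows)

rng = np.random.default_rng(0)
pi = rng.beta(1.0, 5.0, size=20)
Z = sample_restricted_rows(pi, n_rows=5, allowed_counts={2, 3}, rng=1)
print(Z.sum(axis=1))  # every row has 2 or 3 active features
```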