Nonparametric learning from Bayesian models with randomized objective functions

Lyddon, Simon, Walker, Stephen, Holmes, Chris C.

Neural Information Processing Systems

Bayesian learning is built on an assumption that the model space contains a true reflection of the data generating mechanism. This assumption is problematic, particularly in complex data environments. Here we present a Bayesian nonparametric approach to learning that makes use of statistical models, but does not assume that the model is true. Our approach has provably better properties than using a parametric model and admits a Monte Carlo sampling scheme that can afford massive scalability on modern computer architectures. The model-based aspect of learning is particularly attractive for regularizing nonparametric inference when the sample size is small, and also for correcting approximate approaches such as variational Bayes (VB). We demonstrate the approach on a number of examples including VB classifiers and Bayesian random forests.


Reviews: Nonparametric learning from Bayesian models with randomized objective functions

Neural Information Processing Systems

The idea: You want to do Bayesian inference on a parameter theta, with prior pi(theta) and parametric likelihood f_theta, but you're not sure if the likelihood is correctly specified. So put a nonparametric prior on the sampling distribution: a mixture of Dirichlet processes centered at f_theta with mixing distribution pi(theta). The concentration parameter of the DP provides a sliding scale between vanilla Bayesian inference (total confidence in the parametric model) and Bayesian bootstrap (no confidence at all, use the empirical distribution). This is a simple idea, but the paper presents it lucidly and compellingly, beginning with a diverse list of potential applications: the method may be viewed as regularization of a nonparametric Bayesian model towards a parametric one; as robustification of a parametric Bayesian model to misspecification; as a means of correcting a variational approximation; or as nonparametric decision theory, when the log-likelihood is swapped out for an arbitrary utility function. As for implementation, the procedure requires (1) sampling from the parametric Bayesian posterior distribution and (2) performing a p-dimensional maximization, where p is the dimension of theta.
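To make the review's two-step recipe concrete, here is a minimal sketch of a posterior-bootstrap-style sampler for the simplest case, the mean of a Gaussian model with known variance. The conjugate centring posterior, the pseudo-sample size, and all function and argument names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def npl_posterior_bootstrap(x, c=1.0, n_boot=2000, n_pseudo=50,
                            prior_mean=0.0, prior_var=10.0, sigma2=1.0,
                            rng=None):
    """Sketch of a posterior-bootstrap-style NPL sampler for the mean of a
    N(theta, sigma2) model with sigma2 known, centring the DP on that model.

    c is the DP concentration: large c recovers standard parametric Bayes,
    c -> 0 approaches the Bayesian bootstrap on the observed data alone.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)

    # Conjugate parametric posterior for the centring model N(theta, sigma2).
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + x.sum() / sigma2)

    draws = np.empty(n_boot)
    for b in range(n_boot):
        # (1) Sample a centring parameter from the parametric posterior.
        theta_b = rng.normal(post_mean, np.sqrt(post_var))
        # Pseudo-observations from the centring model f_theta.
        y = rng.normal(theta_b, np.sqrt(sigma2), size=n_pseudo)
        # Dirichlet weights: observed points carry unit mass, pseudo points share c.
        alpha = np.concatenate([np.ones(n), np.full(n_pseudo, c / n_pseudo)])
        w = rng.dirichlet(alpha)
        # (2) Maximise the weighted log-likelihood; for a Gaussian mean this
        # reduces to the weighted average of the real and pseudo data.
        draws[b] = w @ np.concatenate([x, y])
    return draws
```

For non-conjugate models, step (1) would be replaced by draws from whatever posterior sampler is available (or a variational approximation, per the abstract), and the closed-form weighted mean by a numerical p-dimensional maximization of the randomized, weighted log-likelihood.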


Nearest-Neighbor Neural Networks for Geostatistics

Wang, Haoyu, Guan, Yawen, Reich, Brian J

arXiv.org Machine Learning

Kriging is the predominant method for spatial prediction, but it relies on the assumption that predictions are linear combinations of the observations, and often on additional assumptions such as normality and stationarity. We propose a more flexible spatial prediction method based on the Nearest-Neighbor Neural Network (4N) process, which embeds deep learning into a geostatistical model. We show that the 4N process is a valid stochastic process and propose a series of new ways to construct features, based on neighboring information, to be used as inputs to the deep learning model. Our framework outperforms some existing state-of-the-art geostatistical modelling methods on simulated non-Gaussian data and is applied to a massive forestry dataset. Gaussian processes (GPs) are used directly to model Gaussian data and as the basis of non-Gaussian models such as generalized linear (e.g., Diggle et al., 1998), quantile regression (e.g., Lum et al., 2012; Reich, 2012) and spatial extremes (e.g., Cooley et al., 2007; Sang and Gelfand, 2010) models. Similarly, Kriging is the standard method for geostatistical prediction.
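As an illustration of the neighbor-based feature idea (not the paper's exact construction), the sketch below builds, for each location, the distances to its k nearest observed sites together with the responses recorded there, and feeds these features to a small feed-forward network via scikit-learn's MLPRegressor. The feature set, network size, and toy data are assumptions made for the sake of the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def neighbor_features(s_obs, y_obs, s_query, k=10, skip=0):
    """For each query site, stack the distances to its k nearest observed sites
    and the responses observed there (skip=1 leaves the site itself out when
    building features for the training locations)."""
    feats = []
    for s in np.atleast_2d(s_query):
        d = np.linalg.norm(s_obs - s, axis=1)
        idx = np.argsort(d)[skip:skip + k]
        feats.append(np.concatenate([d[idx], y_obs[idx]]))
    return np.asarray(feats)

# Toy non-Gaussian spatial field on the unit square (heavy-tailed noise).
rng = np.random.default_rng(0)
s_obs = rng.uniform(size=(500, 2))
y_obs = np.sin(6 * s_obs[:, 0]) * np.cos(6 * s_obs[:, 1]) \
        + 0.1 * rng.standard_t(df=3, size=500)
s_new = rng.uniform(size=(100, 2))

X_train = neighbor_features(s_obs, y_obs, s_obs, k=10, skip=1)
X_new = neighbor_features(s_obs, y_obs, s_new, k=10)

# A small feed-forward network replaces Kriging's linear combination of
# neighboring observations with a learned nonlinear map.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_obs)
y_pred = net.predict(X_new)
```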


Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis

Damianou, Andreas, Lawrence, Neil D., Ek, Carl Henrik

arXiv.org Machine Learning

Factor analysis aims to determine latent factors, or traits, which summarize a given data set. Inter-battery factor analysis extends this notion to multiple views of the data. In this paper we show how a nonlinear, nonparametric version of these models can be recovered through the Gaussian process latent variable model. This gives us a flexible formalism for multi-view learning where the latent variables can be used both for exploratory purposes and for learning representations that enable efficient inference for ambiguous estimation tasks. Learning is performed in a Bayesian manner through the formulation of a variational compression scheme which gives a rigorous lower bound on the log likelihood. Our Bayesian framework provides strong regularization during training, allowing the structure of the latent space to be determined efficiently and automatically. We demonstrate this by producing the first (to our knowledge) published results of learning from dozens of views, even when data is scarce.
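To make the shared/private factor structure concrete, here is a minimal sketch of the linear inter-battery factor analysis generative model that the paper's GP-LVM formulation generalizes to the nonlinear, nonparametric case; the dimensions and noise levels are arbitrary choices for illustration.

```python
import numpy as np

# Linear inter-battery factor analysis: each view mixes latent factors shared
# across views with factors private to that view. The GP-LVM formulation
# replaces the linear maps below with Gaussian process mappings and infers the
# latent variables (and the shared/private split) variationally.
rng = np.random.default_rng(1)
n, q_shared, q_priv, d1, d2 = 200, 2, 3, 15, 20

z_shared = rng.standard_normal((n, q_shared))   # traits common to both views
z_priv1 = rng.standard_normal((n, q_priv))      # traits seen only in view 1
z_priv2 = rng.standard_normal((n, q_priv))      # traits seen only in view 2

W1s, W1p = rng.standard_normal((q_shared, d1)), rng.standard_normal((q_priv, d1))
W2s, W2p = rng.standard_normal((q_shared, d2)), rng.standard_normal((q_priv, d2))

# Observed views: shared signal + private signal + observation noise.
Y1 = z_shared @ W1s + z_priv1 @ W1p + 0.1 * rng.standard_normal((n, d1))
Y2 = z_shared @ W2s + z_priv2 @ W2p + 0.1 * rng.standard_normal((n, d2))
```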