Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

Neural Information Processing Systems

State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning in nonlinear nonparametric state-space models. We place a Gaussian process prior over the transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, however, we marginalize over the dynamics of the model and instead infer the joint smoothing distribution directly, using specially tailored particle Markov chain Monte Carlo samplers. Once an approximation of the smoothing distribution has been computed, the state transition predictive distribution can be formulated analytically. We make use of sparse Gaussian process models to greatly reduce the computational complexity of the approach.
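To make the model class concrete, here is a minimal numpy sketch of the GP-SSM generative process (not of the PMCMC sampler the paper develops): the transition function is drawn from a GP prior lazily, by conditioning each new evaluation on the evaluations already made. The kernel, noise levels, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def se_kernel(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = np.subtract.outer(a, b)
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def simulate_gpssm(T=100, q=0.05, r=0.1, seed=0):
    """Simulate a 1-D GP-SSM: the transition function f is drawn from a GP
    prior on the fly, by conditioning each new evaluation f(x_{t-1}) on the
    evaluations already made.  x_t = f(x_{t-1}) + process noise,
    y_t = x_t + observation noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)                 # latent states
    y = np.zeros(T)                 # observations
    Xseen, Fseen = [], []           # inputs/values of f revealed so far
    for t in range(1, T):
        xq = np.array([x[t - 1]])   # query point for the transition GP
        if Xseen:
            Xo = np.array(Xseen)
            K = se_kernel(Xo, Xo) + 1e-8 * np.eye(len(Xo))
            k = se_kernel(xq, Xo).ravel()
            w = np.linalg.solve(K, k)
            mu = w @ np.array(Fseen)                 # conditional mean
            var = se_kernel(xq, xq)[0, 0] - w @ k    # conditional variance
        else:
            mu, var = 0.0, se_kernel(xq, xq)[0, 0]
        f = rng.normal(mu, np.sqrt(max(var, 0.0)))
        Xseen.append(x[t - 1]); Fseen.append(f)
        x[t] = f + rng.normal(0.0, np.sqrt(q))
        y[t] = x[t] + rng.normal(0.0, np.sqrt(r))
    return x, y
```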


Variational Gaussian Process State-Space Models

Neural Information Processing Systems

State-space models have been successfully used for more than fifty years in different areas of science and engineering. We present a procedure for efficient variational Bayesian learning of nonlinear state-space models based on sparse Gaussian processes. The result of learning is a tractable posterior over nonlinear dynamical systems. In contrast to conventional parametric models, our approach makes it straightforward to trade off model capacity against computational cost while avoiding overfitting. Our main algorithm uses a hybrid inference approach combining variational Bayes and sequential Monte Carlo. We also present stochastic variational inference and online learning approaches for fast learning with long time series.
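The sparse-GP ingredient can be sketched independently of the full variational algorithm: given inducing inputs Z and a Gaussian posterior N(m_u, S_u) over the inducing outputs, predictions at new inputs follow the standard variational (Titsias-style) conditional. The names and the toy posterior below are assumptions for illustration, not the paper's code.

```python
import numpy as np

def se_kernel(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = np.subtract.outer(a, b)
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def sparse_gp_predict(Xs, Z, m_u, S_u, kernel=se_kernel, jitter=1e-8):
    """Predictive mean and marginal variance at test inputs Xs for a sparse
    GP with inducing inputs Z and variational posterior q(u) = N(m_u, S_u)
    over the inducing outputs u = f(Z):
        q(f*) = N(A m_u, K** - A Kzs + A S_u A^T),  A = Ksz Kzz^{-1}."""
    Kzz = kernel(Z, Z) + jitter * np.eye(len(Z))
    Ksz = kernel(Xs, Z)
    A = np.linalg.solve(Kzz, Ksz.T).T        # Ksz @ Kzz^{-1}
    mean = A @ m_u
    cov = kernel(Xs, Xs) - A @ Ksz.T + A @ S_u @ A.T
    return mean, np.diag(cov)

# e.g. five inducing points summarizing a transition function:
Z = np.linspace(-2, 2, 5)
m_u = np.tanh(Z)                             # made-up posterior mean
S_u = 0.01 * np.eye(5)                       # made-up posterior covariance
mean, var = sparse_gp_predict(np.linspace(-2, 2, 50), Z, m_u, S_u)
```

The appeal in this setting is that the number of inducing points, rather than the length of the time series, controls the cost of the GP computations, which is the capacity/cost trade-off the abstract refers to.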


A Bayesian Nonparametric Approach to Modeling Mobility Patterns

AAAI Conferences

Constructing models of mobile agents can be difficult without domain-specific knowledge. Parametric models flexible enough to capture all mobility patterns that an expert believes are possible are often large, requiring a great deal of training data. In contrast, nonparametric models are extremely flexible and can generalize well with relatively little training data. We propose modeling the mobility patterns of moving agents as a mixture of Gaussian processes (GP) with a Dirichlet process (DP) prior over mixture weights. The GP provides a flexible representation for each individual mobility pattern, while the DP assigns observed trajectories to particular mobility patterns. Both the GPs and the DP adjust the model's complexity based on available data, implicitly avoiding issues of over-fitting or under-fitting. We apply our model to a helicopter-based tracking task, where the mobility patterns of the tracked agents (cars) are learned from real data collected from taxis in the greater Boston area.
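The assignment step of such a DP mixture can be sketched as a collapsed Gibbs computation: each trajectory is scored against every existing pattern (weighted by the pattern's size) and against a fresh pattern (weighted by the DP concentration), with the GP supplying the likelihood. The sketch below assumes 1-D trajectories and a squared-exponential kernel; all names and the cluster layout are hypothetical, not the paper's implementation.

```python
import numpy as np

def se_kernel(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = np.subtract.outer(a, b)
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_log_marginal(x, y, noise=0.1, jitter=1e-8):
    """log N(y | 0, K(x, x) + noise^2 I): GP evidence for one trajectory."""
    K = se_kernel(x, x) + (noise**2 + jitter) * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ a - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def crp_assignment_scores(x_new, y_new, clusters, alpha_dp=1.0):
    """Unnormalized log-probabilities for assigning one trajectory (inputs
    x_new, e.g. time stamps; outputs y_new, e.g. one position coordinate)
    to each existing mobility pattern, or to a brand-new one.  A cluster's
    predictive likelihood is the marginal-likelihood ratio
    p(old, new) / p(old), as in a collapsed Gibbs sweep."""
    scores = []
    for c in clusters:   # c = {'x': inputs, 'y': outputs, 'n': #trajectories}
        joint = gp_log_marginal(np.concatenate([c['x'], x_new]),
                                np.concatenate([c['y'], y_new]))
        scores.append(np.log(c['n']) + joint - gp_log_marginal(c['x'], c['y']))
    scores.append(np.log(alpha_dp) + gp_log_marginal(x_new, y_new))
    return np.array(scores)   # softmax-normalize, then sample an assignment
```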


Fitting Gaussian Process Models in Python

#artificialintelligence

A common applied statistics task involves building regression models to characterize non-linear relationships between variables. When we write a function that takes continuous values as inputs, we are essentially implying an infinite vector that only returns values (indexed by the inputs) when the function is called upon to do so. To make this notion of a "distribution over functions" more concrete, let's quickly demonstrate how we obtain realizations from a Gaussian process, each of which is an evaluation of a function over a set of points. We are going to generate realizations sequentially, point by point, using the lovely conditioning property of multivariate Gaussian distributions.
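The post's own code is not reproduced here, but a minimal numpy version of that sequential-conditioning demonstration might look like this (the kernel choice and names are assumptions):

```python
import numpy as np

def se_kernel(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = np.subtract.outer(a, b)
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def sample_gp_sequentially(xs, jitter=1e-10, seed=None):
    """Draw one GP realization point by point: each new value f(x_i) is
    sampled from the Gaussian conditional given the values drawn so far."""
    rng = np.random.default_rng(seed)
    xs = np.asarray(xs, dtype=float)
    fs = np.empty_like(xs)
    for i in range(len(xs)):
        xq = xs[i:i + 1]
        if i == 0:
            mu, var = 0.0, se_kernel(xq, xq)[0, 0]
        else:
            K = se_kernel(xs[:i], xs[:i]) + jitter * np.eye(i)
            k = se_kernel(xq, xs[:i]).ravel()
            w = np.linalg.solve(K, k)
            mu = w @ fs[:i]                        # conditional mean
            var = se_kernel(xq, xq)[0, 0] - w @ k  # conditional variance
        fs[i] = rng.normal(mu, np.sqrt(max(var, 0.0)))
    return fs

# One realization over 50 points on [0, 5]:
f = sample_gp_sequentially(np.linspace(0.0, 5.0, 50), seed=42)
```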