Learning non-Gaussian Time Series using the Box-Cox Gaussian Process

Gonzalo Rios, Felipe Tobar

arXiv.org Machine Learning 

A Gaussian process (GP) [1] is a prior distribution over functions whose support includes a wide class of phenomena via the design of its mean and covariance functions, the parameters of which provide a meaningful interpretation of the process at hand. Beyond regression [2], GPs have been used extensively over the last two decades for classification [3], density estimation [4], filter design [5], model identification [6] and optimisation [7]. In general terms, all of these generative models have two stages: the latent process is modelled as a GP, and the observation is modelled (conditional on the latent process) as a non-Gaussian variable. This class of models is referred to as GPs with non-Gaussian likelihood, or as generalised GPs. These usually consider likelihood functions from the exponential family, such as the Laplace, Poisson, beta and gamma distributions [8]. A well-known example is the GP classification model, where the classes are represented by the output of an activation function into which a latent GP is fed. A slightly different approach to non-Gaussian models, which is not constrained to the exponential family, is the warped GP (WGP, [9]). The WGP models non-Gaussian data by assuming that there is a transformation φ such that the observations can be passed through φ to yield a GP; therefore, the likelihood function of this model is not designed directly but, rather, induced by the transformation (a.k.a. the warping function).
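To make the warping idea concrete, the sketch below implements the Box-Cox transformation named in the title: positive, skewed observations are mapped through the (parametric) warping φ_λ so that a standard GP can be fit in the transformed space, and predictions are mapped back through the inverse. This is a minimal illustration under our own assumptions, not the authors' implementation; the choice λ = 0 (the log transform) and the log-normal toy data are for demonstration only.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox warping phi_lambda: maps positive observations toward
    Gaussianity. lam = 0 is the log transform (limit case)."""
    if lam == 0.0:
        return np.log(y)
    return (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse warping: maps GP-space values back to observation space."""
    if lam == 0.0:
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

# Toy non-Gaussian data: log-normal observations (positive, skewed).
rng = np.random.default_rng(0)
y = np.exp(rng.normal(size=1000))

# Warp with lam = 0: the transformed data are exactly Gaussian here,
# so a standard GP could be fit to (inputs, z) and its predictive
# mean/quantiles mapped back through inv_boxcox.
z = boxcox(y, lam=0.0)
```

In the full model, λ is learned jointly with the GP hyperparameters by maximising the marginal likelihood of the warped data, which includes the Jacobian of φ_λ; the sketch above only shows the deterministic warping itself.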
