MOGPTK: The Multi-Output Gaussian Process Toolkit
Taco de Wolff, Alejandro Cuevas, Felipe Tobar
GPs are designed by parametrizing a covariance kernel, meaning that constructing expressive kernels allows for an improved representation of complex signals. Recent advances extend the GP concept to multiple series (or channels), where both auto-correlations and cross-correlations among channels are designed jointly; we refer to these models as multi-output GP (MOGP) models. A key attribute of MOGPs is that appropriate cross-correlations allow for improved data imputation and prediction when the channels have missing data. Popular MOGP models include: i) the Linear Model of Coregionalization (LMC) [2], ii) the Cross-Spectral Mixture (CSM) [3], iii) the Convolutional Model (CONV) [4], and iv) the Multi-Output Spectral Mixture (MOSM) [5]. Training MOGPs is challenging due to the large number of parameters required to model all the cross-correlations, and because most MOGP models are parametrized in the spectral domain and are thus prone to local minima. Therefore, a unified framework that implements these MOGPs is required both by the GP research community and by those interested in practical applications for multi-channel data.
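To make the multi-output construction concrete, the following is a minimal NumPy sketch of the covariance implied by the Linear Model of Coregionalization (LMC): each latent GP q has a kernel k_q and a coregionalization matrix B_q (rank-1 here, B_q = w_q w_q^T), and the full multi-channel covariance is K = sum_q B_q ⊗ k_q(x, x). This is an illustrative sketch of the model, not MOGPTK's actual API; the function names and the choice of a squared-exponential kernel are assumptions for the example.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    # Squared-exponential (RBF) kernel between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def lmc_covariance(x, W, lengthscales):
    """LMC covariance over len(W) channels at shared inputs x.

    W has shape (num_channels, Q); column q holds the mixing weights w_q,
    so B_q = w_q w_q^T and K = sum_q kron(B_q, k_q(x, x)).
    Hypothetical helper for illustration only.
    """
    Q = len(lengthscales)
    return sum(
        np.kron(np.outer(W[:, q], W[:, q]), rbf(x, x, lengthscales[q]))
        for q in range(Q)
    )

# Two channels mixed from Q = 2 latent GPs over 5 shared inputs.
x = np.linspace(0.0, 1.0, 5)
W = np.array([[1.0, 0.5],
              [0.3, 2.0]])
K = lmc_covariance(x, W, lengthscales=[0.5, 1.0])
```

Because K is a sum of Kronecker products of positive semi-definite factors, it is itself a valid (symmetric, positive semi-definite) covariance over all channel-input pairs; the off-diagonal blocks are the cross-covariances that enable imputation across channels.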
Feb-9-2020