Covariance kernel

Bayesian neural networks with interpretable priors from Mercer kernels

Alberts, Alex, Bilionis, Ilias

arXiv.org Machine Learning

Quantifying the uncertainty in the output of a neural network is essential for deployment in scientific or engineering applications where decisions must be made under limited or noisy data. Bayesian neural networks (BNNs) provide a framework for this purpose by constructing a Bayesian posterior distribution over the network parameters. However, the prior, which is of key importance in any Bayesian setting, is rarely meaningful for BNNs. This is because the complexity of the input-to-output map of a BNN makes it difficult to understand how particular parameter distributions enforce any interpretable constraint on the output space. Gaussian processes (GPs), on the other hand, are often preferred in uncertainty quantification tasks due to their interpretability. The drawback is that GPs are limited to small datasets without advanced approximation techniques, which often rely on the covariance kernel having a specific structure. To address these challenges, we introduce a new class of priors for BNNs, called Mercer priors, such that the resulting BNN has samples that approximate those of a specified GP. The method works by defining a prior directly over the network parameters from the Mercer representation of the covariance kernel, and does not rely on the network having a specific structure. In doing so, we can exploit the scalability of BNNs in a meaningful Bayesian way.
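
To make the construction concrete, here is a minimal sketch of the underlying idea, assuming the Brownian-motion kernel k(s, t) = min(s, t), whose Mercer eigenpairs are known in closed form; the paper's construction applies to general kernels and to actual network parameterizations, so the names and the truncation below are purely illustrative.

```python
# Minimal sketch: a truncated Mercer (Karhunen-Loeve) expansion whose
# independent normal coefficients play the role of network parameters, with
# the prior chosen so that samples approximate a target GP. The Brownian
# kernel k(s, t) = min(s, t) is used because its eigenpairs are closed-form.
import numpy as np

n_terms = 200                                   # truncation level of the series
t = np.linspace(0.0, 1.0, 500)                  # input grid on [0, 1]

i = np.arange(1, n_terms + 1)
freq = (i - 0.5) * np.pi
lam = 1.0 / freq**2                             # eigenvalues of min(s, t)
phi = np.sqrt(2.0) * np.sin(np.outer(t, freq))  # eigenfunctions, shape (500, 200)

# Mercer prior over the weights: independent N(0, lambda_i). A draw of
# f(t) = sum_i w_i phi_i(t) then approximates a draw of GP(0, min(s, t)).
rng = np.random.default_rng(0)
w = rng.normal(0.0, np.sqrt(lam))
f_sample = phi @ w

# Sanity check: the empirical covariance over many draws approaches min(s, t),
# up to truncation and Monte Carlo error.
W = rng.normal(size=(n_terms, 5000)) * np.sqrt(lam)[:, None]
F = phi @ W
print(np.abs(F @ F.T / 5000 - np.minimum.outer(t, t)).max())
```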



GP Kernels for Cross-Spectrum Analysis

Kyle R. Ulrich, David E. Carlson, Kafui Dzirasa, Lawrence Carin

Neural Information Processing Systems

Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, Wilson and Adams (2013) proposed the spectral mixture (SM) kernel to model the spectral density of a single task in a Gaussian process framework. In this paper, we develop a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. We demonstrate the expressive capabilities of the CSM kernel through implementation of a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel. Results are presented for measured multi-region electrophysiological data.
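
As an illustration of the structure such a kernel encodes, the following is a hedged sketch of a CSM-style cross-covariance in which each spectral component carries per-channel amplitudes and phases; the parameter names and exact functional form are assumptions for illustration, not necessarily the paper's precise parameterization.

```python
# Hedged sketch of a cross-spectral-mixture-style kernel for multiple channels:
# each component q has a shared frequency mu_q and bandwidth v_q, plus
# per-channel amplitudes a[q, c] and phases theta[q, c] that encode power and
# phase relationships between channels. Illustrative parameterization only.
import numpy as np

def csm_cross_kernel(tau, mu, v, a, theta, c1, c2):
    """k_{c1,c2}(tau) = sum_q a[q,c1] a[q,c2]
    * exp(-2 pi^2 tau^2 v_q) * cos(2 pi mu_q tau + theta[q,c1] - theta[q,c2])."""
    tau = np.asarray(tau)[..., None]                 # broadcast over components
    envelope = np.exp(-2.0 * np.pi**2 * tau**2 * v)  # Gaussian spectral width
    carrier = np.cos(2.0 * np.pi * mu * tau + (theta[:, c1] - theta[:, c2]))
    return (a[:, c1] * a[:, c2] * envelope * carrier).sum(axis=-1)

# Two channels, two spectral components (e.g. ~8 Hz and ~30 Hz bands).
mu = np.array([8.0, 30.0]); v = np.array([1.0, 4.0])
a = np.array([[1.0, 0.8], [0.5, 0.6]])               # a[q, channel]
theta = np.array([[0.0, 0.6], [0.0, -0.3]])          # phase offset per channel

tau = np.linspace(-0.5, 0.5, 201)
k12 = csm_cross_kernel(tau, mu, v, a, theta, 0, 1)   # cross-covariance, ch 1-2
k11 = csm_cross_kernel(tau, mu, v, a, theta, 0, 0)   # auto-covariance, ch 1
```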


Kernel Model Validation: How To Do It, And Why You Should Care

Graziani, Carlo, Ngom, Marieme

arXiv.org Machine Learning

Gaussian Process (GP) models are popular tools in uncertainty quantification (UQ) because they purport to furnish functional uncertainty estimates that can be used to represent model uncertainty. It is often difficult to state with precision what probabilistic interpretation attaches to such an uncertainty, and in what way it is calibrated. Without such a calibration statement, the value of such uncertainty estimates is quite limited and qualitative. We motivate the importance of proper probabilistic calibration of GP predictions by describing how GP predictive calibration failures can cause degraded convergence properties in a target optimization algorithm called Targeted Adaptive Design (TAD). We discuss the interpretation of GP-generated uncertainty intervals in UQ, and how one may learn to trust them, through a formal procedure for covariance kernel validation that exploits the multivariate normal nature of GP predictions. We give simple examples of misspecified 1-dimensional GP regression models, and discuss the situation with respect to higher-dimensional models.
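
One generic instance of such a check, sketched below under stated assumptions, whitens held-out residuals with the Cholesky factor of the predictive covariance and tests them against N(0, I); the helper names (gp_predict, calibration_pvalue) are hypothetical, and this is a standard diagnostic rather than necessarily the paper's exact procedure.

```python
# Hedged sketch of a kernel-validation diagnostic that exploits the
# multivariate normal form of GP predictions: under a well-specified kernel,
# whitened held-out residuals are i.i.d. N(0, 1), so their squared norm is
# chi-squared with m degrees of freedom.
import numpy as np
from scipy import stats
from scipy.linalg import cho_factor, cho_solve, cholesky

def gp_predict(K, k_star, K_star_star, y, noise_var):
    """Standard GP posterior mean and covariance at test points."""
    L = cho_factor(K + noise_var * np.eye(len(y)), lower=True)
    mean = k_star.T @ cho_solve(L, y)
    cov = K_star_star - k_star.T @ cho_solve(L, k_star)
    return mean, cov

def calibration_pvalue(y_test, mean, cov, noise_var):
    """p-value of the chi-squared test on whitened held-out residuals."""
    S = cov + noise_var * np.eye(len(y_test))
    z = np.linalg.solve(cholesky(S, lower=True), y_test - mean)  # whiten
    return stats.chi2.sf(z @ z, df=len(y_test))

# Example: data from a rough function, fit with an over-smooth RBF kernel.
rbf = lambda x1, x2, ls: np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ls**2)
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 60)); xs = np.sort(rng.uniform(0, 10, 20))
f = lambda t: np.sin(3 * t) + 0.3 * np.sin(17 * t)   # fast wiggle in the truth
y = f(x) + 0.05 * rng.normal(size=60); ys = f(xs) + 0.05 * rng.normal(size=20)

mean, cov = gp_predict(rbf(x, x, 2.0), rbf(x, xs, 2.0), rbf(xs, xs, 2.0),
                       y, 0.05**2)
print(calibration_pvalue(ys, mean, cov, 0.05**2))  # small p flags miscalibration
```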




Discrete Gaussian Vector Fields On Meshes

Gillan, Michael, Siegert, Stefan, Youngman, Ben

arXiv.org Machine Learning

Though the underlying fields associated with vector-valued environmental data are continuous, observations themselves are discrete. For example, climate models typically output grid-based representations of wind fields or ocean currents, and these are often downscaled to a discrete set of points. By treating the area of interest as a two-dimensional manifold that can be represented as a triangular mesh and embedded in Euclidean space, this work shows that discrete intrinsic Gaussian processes for vector-valued data can be developed from discrete differential operators defined with respect to a mesh. These Gaussian processes account for the geometry and curvature of the manifold whilst also providing a flexible and practical formulation that can be readily applied to any two-dimensional mesh. We show that these models can capture harmonic flows, incorporate boundary conditions, and model non-stationary data. Finally, we apply these models to downscaling stationary and non-stationary gridded wind data on the globe, and to inference of ocean currents from sparse observations in bounded domains.
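
The scalar building block behind such constructions can be sketched as follows: a mesh Laplacian yields a sparse Matérn-type precision via the SPDE connection. The sketch uses a uniform-weight Laplacian on a toy mesh rather than the geometry-aware cotangent operator, and shows only the scalar case; the vector-valued extension via discrete differential operators is the paper's contribution, so everything below is a simplified stand-in.

```python
# Hedged sketch of the scalar building block: a Matern-type GP on a triangle
# mesh via the SPDE connection, with precision Q = (kappa^2 M + L)^2 built
# from a mesh Laplacian. A uniform-weight Laplacian stands in for the
# geometry-aware cotangent operator used in practice.
import numpy as np
import scipy.sparse as sp

def mesh_laplacian(n_vertices, triangles):
    """Uniform-weight graph Laplacian from mesh connectivity."""
    rows, cols = [], []
    for tri in triangles:
        for a in range(3):
            for b in range(3):
                if a != b:
                    rows.append(tri[a]); cols.append(tri[b])
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_vertices, n_vertices)).tocsr()
    W.data[:] = 1.0                            # collapse duplicated edges
    deg = np.asarray(W.sum(axis=1)).ravel()
    return sp.diags(deg) - W

# Toy mesh: a unit square split into two triangles (4 vertices).
triangles = np.array([[0, 1, 2], [0, 2, 3]])
L = mesh_laplacian(4, triangles)

kappa = 1.0
M = sp.identity(4)                             # lumped mass matrix stand-in
A = kappa**2 * M + L
Q = (A @ A).toarray()                          # dense is fine at this size;
                                               # use sparse Cholesky at scale

# Sample from N(0, Q^{-1}): with Q = C C^T, solve C^T x = z for white noise z.
rng = np.random.default_rng(2)
C = np.linalg.cholesky(Q)
sample = np.linalg.solve(C.T, rng.normal(size=4))  # one draw at the vertices
```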


Adaptive finite element type decomposition of Gaussian processes

Kim, Jaehoan, Bhattacharya, Anirban, Pati, Debdeep

arXiv.org Machine Learning

In this paper, we investigate a class of approximate Gaussian processes (GPs) obtained by taking a linear combination of compactly supported basis functions, with the basis coefficients endowed with a dependent Gaussian prior distribution. This general class includes a popular approach that uses a finite element approximation of the stochastic partial differential equation (SPDE) associated with the Matérn GP. We also explore a scalable alternative popular in the computer emulation literature, where the basis coefficients at a lattice are drawn from a Gaussian process with an inverse-Gamma bandwidth. For both approaches, we study concentration rates of the posterior distribution. We demonstrate that the SPDE-based approach with a fixed smoothness parameter leads to a suboptimal rate, regardless of how the number of basis functions and the bandwidth are chosen, when the underlying true function is sufficiently smooth. On the flip side, we show that the latter approach is rate-optimal adaptively over all smoothness levels of the underlying true function if an appropriate prior is placed on the number of basis functions. Efficient computational strategies are developed, and numerics are provided to illustrate the theoretical results.
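
A minimal sketch of the second construction, assuming hat (piecewise-linear finite element) basis functions on a uniform lattice and a fixed RBF bandwidth; the paper instead places priors on the bandwidth and on the number of basis functions, so the fixed values below are purely illustrative.

```python
# Hedged sketch: an approximate GP as a linear combination of compactly
# supported hat basis functions, with lattice coefficients drawn from a
# dependent Gaussian prior (an RBF kernel with a fixed bandwidth h here;
# the paper endows h and the number of basis functions with priors).
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear 'hat' functions: B[i, j] = b_j(x_i), each supported
    on (knots[j-1], knots[j+1])."""
    h = knots[1] - knots[0]                          # uniform knot spacing
    d = np.abs(x[:, None] - knots[None, :]) / h
    return np.clip(1.0 - d, 0.0, None)

rng = np.random.default_rng(3)
knots = np.linspace(0.0, 1.0, 20)                    # lattice of knots
x = np.linspace(0.0, 1.0, 400)

# Dependent Gaussian prior on coefficients: beta ~ N(0, K_h(knots, knots)).
h = 0.2
K = np.exp(-0.5 * (knots[:, None] - knots[None, :])**2 / h**2)
C = np.linalg.cholesky(K + 1e-10 * np.eye(len(knots)))   # jitter for stability
beta = C @ rng.normal(size=len(knots))

f = hat_basis(x, knots) @ beta                       # one draw of the process
```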


Stochastic Processes with Modified Lognormal Distribution Featuring Flexible Upper Tail

Hristopulos, Dionissios T., Baxevani, Anastassia, Kaniadakis, Giorgio

arXiv.org Machine Learning

Asymmetric, non-Gaussian probability distributions are often observed in the analysis of natural and engineering datasets. The lognormal distribution is a standard model for data with skewed frequency histograms and fat tails. However, the lognormal law severely restricts the asymptotic dependence of the probability density and the hazard function for high values. Herein we present a family of three-parameter non-Gaussian probability density functions that are based on generalized kappa-exponential and kappa-logarithm functions and investigate its mathematical properties. These kappa-lognormal densities represent continuous deformations of the lognormal with lighter right tails, controlled by the parameter kappa. In addition, bimodal distributions are obtained for certain parameter combinations. We derive closed-form analytic expressions for the main statistical functions of the kappa-lognormal distribution. For the moments, we derive bounds that are based on hypergeometric functions as well as series expansions. Explicit expressions for the gradient and Hessian of the negative log-likelihood are obtained to facilitate numerical maximum-likelihood estimates of the kappa-lognormal parameters from data. We also formulate a joint probability density function for kappa-lognormal stochastic processes by applying Jacobi's multivariate theorem to a latent Gaussian process. Estimation of the kappa-lognormal distribution based on synthetic and real data is explored. Furthermore, we investigate applications of kappa-lognormal processes with different covariance kernels in time series forecasting and spatial interpolation using warped Gaussian process regression. Our results are of practical interest for modeling skewed distributions in various scientific and engineering fields.
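
One plausible reading of the construction, sketched below, warps a latent Gaussian through the Kaniadakis kappa-exponential and obtains the density by a change of variables; the kappa-logarithm and its derivative are standard, but the exact parameterization of the paper's density may differ.

```python
# Hedged sketch: a kappa-lognormal density as a warped latent Gaussian,
# X = exp_kappa(Z) with Z ~ N(mu, sigma^2), via change of variables.
# This parameterization is an assumption for illustration.
import numpy as np

def ln_kappa(x, kappa):
    """Kaniadakis kappa-logarithm: (x^kappa - x^(-kappa)) / (2 kappa)."""
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def kappa_lognormal_pdf(x, mu, sigma, kappa):
    """f(x) = phi((ln_kappa(x) - mu)/sigma) / sigma * d/dx ln_kappa(x); the
    Jacobian makes the right tail decay like exp(-x^(2 kappa)), i.e. lighter
    than the lognormal's."""
    z = (ln_kappa(x, kappa) - mu) / sigma
    jac = (x**(kappa - 1.0) + x**(-kappa - 1.0)) / 2.0   # d ln_kappa / dx
    return np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma) * jac

x = np.linspace(1e-3, 10.0, 2000)
for kappa in (0.1, 0.5, 1.0):
    pdf = kappa_lognormal_pdf(x, mu=0.0, sigma=1.0, kappa=kappa)
    print(kappa, (pdf * (x[1] - x[0])).sum())  # each should be close to 1
```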