

Uniform Error Bounds for Gaussian Process Regression with Application to Safe Control

Armin Lederer, Jonas Umlauft, Sandra Hirche

Neural Information Processing Systems

Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure, and uniform error bounds have been derived, which allow safe control based on these models.



Neural Information Processing Systems

The approximate posterior distributions of the latent variables are derived with variational inference, and the evidence lower bound is evaluated and optimized by the proposed recursive sampling scheme.




Active Learning for Manifold Gaussian Process Regression

Cheng, Yuanxing, Kang, Lulu, Wang, Yiwei, Liu, Chun

arXiv.org Machine Learning

This paper introduces an active learning framework for manifold Gaussian Process (GP) regression, combining manifold learning with strategic data selection to improve accuracy in high-dimensional spaces. Our method jointly optimizes a neural network for dimensionality reduction and a Gaussian process regressor in the latent space, supervised by an active learning criterion that minimizes global prediction error. Experiments on synthetic data demonstrate superior performance over random sequential sampling. The framework efficiently handles complex, discontinuous functions while preserving computational tractability, offering practical value for scientific and engineering applications. Future work will focus on scalability and uncertainty-aware manifold learning.
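As a rough illustration of active data selection for a GP, here is a greedy maximum-posterior-variance rule in the plain input space; it is a simpler stand-in for the paper's latent-space criterion that targets global prediction error, and all names and constants are illustrative.

```python
import numpy as np

# Rough illustration of active data selection for GP regression: greedily
# pick the candidate with maximal posterior variance. (The paper's criterion
# works in a learned latent space and minimizes global prediction error;
# this input-space variance rule is only a simple stand-in.)

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_var(X, Xs, noise=1e-3):
    L = np.linalg.cholesky(rbf(X, X) + noise * np.eye(len(X)))
    v = np.linalg.solve(L, rbf(X, Xs))
    return 1.0 - (v ** 2).sum(0)          # RBF prior variance is 1

rng = np.random.default_rng(0)
pool = rng.uniform(0, 1, size=(200, 1))   # candidate inputs
chosen = [0]                              # seed with an arbitrary point
for _ in range(10):
    var = posterior_var(pool[chosen], pool)
    var[chosen] = -np.inf                 # never re-select a chosen point
    chosen.append(int(np.argmax(var)))
```

Each round conditions only on the inputs, since GP posterior variance does not depend on the observed outputs; that is what makes this rule cheap to evaluate before labels arrive.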


Adaptive sparse variational approximations for Gaussian process regression

Nieman, Dennis, Szabó, Botond

arXiv.org Machine Learning

Department of Decision Sciences, Bocconi Institute for Data Science and Analytics, Bocconi University, Milan

Abstract. Accurate tuning of hyperparameters is crucial to ensure that models can generalise effectively across different settings. We construct a variational approximation to a hierarchical Bayes procedure, and derive upper bounds for the contraction rate of the variational posterior in an abstract setting. The theory is applied to various Gaussian process priors and variational classes, resulting in minimax optimal rates. Our theoretical results are accompanied by numerical analysis on both synthetic and real-world data sets.

Keywords: variational inference, Bayesian model selection, Gaussian processes, nonparametric regression, adaptation, posterior contraction rates

1 Introduction

A core challenge in Bayesian statistics is scalability, i.e. the computation of the posterior for large sample sizes. Variational Bayes approximation is a standard approach to speed up inference. Variational posteriors are random probability measures that minimise the Kullback-Leibler divergence between a suitable class of distributions and the otherwise hard-to-compute posterior. Typically, the variational class of distributions over which the optimisation takes place does not contain the original posterior, hence the variational procedure can be viewed as a projection onto this class. The projected variational distribution then approximates the posterior. During the approximation procedure one inevitably loses information, and hence it is important to characterise the accuracy of the approach. Despite the wide use of variational approximations, their theoretical underpinning started to emerge only recently; see for instance Alquier and Ridgway (2020); Yang et al. (2020); Zhang and Gao (2020a); Ray and Szabó (2022).

In a Bayesian procedure, the choice of prior reflects the presumed properties of the unknown parameter. In contrast to regular parametric models, where in view of the Bernstein-von Mises theorem the posterior is asymptotically normal, in nonparametric settings the prior plays a crucial role in the asymptotic behaviour of the posterior. In fact, the large-sample behaviour of the posterior typically depends intricately on the choice of prior hyperparameters, so it is vital that these are tuned correctly. The two classical approaches are hierarchical and empirical Bayes methods.
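For contrast with the hierarchical route studied in the paper, the empirical Bayes alternative named at the end of the excerpt can be sketched as maximising the GP log marginal likelihood over a hyperparameter grid; the data, grid, and noise level below are toy choices, not the paper's setup.

```python
import numpy as np

# Sketch of the empirical Bayes alternative: choose the GP prior
# lengthscale by maximising the log marginal likelihood over a grid.
# (Toy data; the paper instead analyses variational hierarchical Bayes.)

def log_marginal(X, y, ls, noise=1e-2):
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ls**2)
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

X = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * X)
grid = [0.01, 0.05, 0.1, 0.2, 0.5, 1.0]
best = max(grid, key=lambda ls: log_marginal(X, y, ls))
```

The marginal likelihood automatically trades data fit against model complexity, which is why the extreme lengthscales on the grid lose to an intermediate one here.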


Optimizing Product Provenance Verification using Data Valuation Methods

Yousuf, Raquib Bin, Just, Hoang Anh, Xu, Shengzhe, Mayer, Brian, Deklerck, Victor, Truszkowski, Jakub, Simeone, John C., Saunders, Jade, Lu, Chang-Tien, Jia, Ruoxi, Ramakrishnan, Naren

arXiv.org Artificial Intelligence

Determining and verifying product provenance remains a critical challenge in global supply chains, particularly as geopolitical conflicts and shifting borders create new incentives for misrepresentation of commodities, such as hiding the origin of illegally harvested timber or agriculture grown on illegally cleared land. Stable Isotope Ratio Analysis (SIRA), combined with Gaussian process regression-based isoscapes, has emerged as a powerful tool for geographic origin verification. However, the effectiveness of these models is often constrained by data scarcity and suboptimal dataset selection. In this work, we introduce a novel data valuation framework designed to enhance the selection and utilization of training data for machine learning models applied in SIRA. By prioritizing high-informative samples, our approach improves model robustness and predictive accuracy.

Determining and verifying product provenance is a challenge in global supply chains, as geopolitics and the lure of "don't ask, don't tell" with respect to the ecological and social cost creates incentives for misrepresentation of commodities, such as hiding the origin of illegally harvested timber or agriculture grown on illegally cleared land. Product identification and provenance verification of traded natural resources have emerged as promising research areas, with various combinations of methods used based on the specific natural resource sector and the level of granularity of species identification and origin-provenance determination. For example, for wood and forest products, determining species identification and geographic harvest provenance requires utilizing multiple testing methods and tools [5, 8, 20].
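A hedged toy version of data valuation for a regression model: score each training point by its leave-one-out contribution to validation error. The kernel ridge model, data, and scoring rule are illustrative stand-ins; the paper's valuation framework is more sophisticated than this.

```python
import numpy as np

# Toy data valuation: rate each training point by how much validation
# error rises when it is removed (leave-one-out marginal contribution).
# Kernel ridge regression stands in for the paper's SIRA models.

def fit_predict(Xtr, ytr, Xq, ls=0.5, reg=1e-2):
    K = np.exp(-0.5 * (Xtr[:, None] - Xtr[None, :]) ** 2 / ls**2)
    w = np.linalg.solve(K + reg * np.eye(len(Xtr)), ytr)
    Kq = np.exp(-0.5 * (Xq[:, None] - Xtr[None, :]) ** 2 / ls**2)
    return Kq @ w

rng = np.random.default_rng(3)
Xtr = rng.uniform(-2, 2, 30)
ytr = np.tanh(Xtr) + 0.05 * rng.normal(size=30)
Xval = np.linspace(-2, 2, 50)
yval = np.tanh(Xval)

base = np.mean((fit_predict(Xtr, ytr, Xval) - yval) ** 2)
value = np.empty(len(Xtr))
for i in range(len(Xtr)):
    keep = np.arange(len(Xtr)) != i
    loo = np.mean((fit_predict(Xtr[keep], ytr[keep], Xval) - yval) ** 2)
    value[i] = loo - base    # positive: removing this point hurts the model
```

High-value points under this score are exactly the "high-informative samples" one would prioritize when labeling or retaining data is expensive.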


Symmetric Kernels with Non-Symmetric Data: A Data-Agnostic Learnability Bound

Lavie, Itay, Ringel, Zohar

arXiv.org Machine Learning

Kernel ridge regression (KRR) and Gaussian processes (GPs) are fundamental tools in statistics and machine learning with recent applications to highly over-parameterized deep neural networks. The ability of these tools to learn a target function is directly related to the eigenvalues of their kernel sampled on the input data. Targets having support on higher eigenvalues are more learnable. While kernels are often highly symmetric objects, the data is often not. Thus kernel symmetry seems to have little to no bearing on the above eigenvalues or learnability, making spectral analysis on real-world data challenging. Here, we show that contrary to this common lore, one may use eigenvalues and eigenfunctions associated with highly idealized data-measures to bound learnability on realistic data. As a demonstration, we give a theoretical lower bound on the sample complexity of copying heads for kernels associated with generic transformers acting on natural language.
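The spectral quantities the abstract refers to are concrete and easy to compute on a sample: the eigenvalues of the kernel Gram matrix on the data. A toy illustration follows; the kernel, data distribution, and scale are arbitrary choices, not the paper's setting.

```python
import numpy as np

# Toy illustration: the eigenvalues of the kernel Gram matrix sampled on
# the inputs are the quantities that govern which targets KRR / GP
# regression learns easily.

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))             # data with no special symmetry
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)                     # RBF kernel: a symmetric object

# Empirical kernel spectrum; eigh returns ascending order, so reverse it.
eigvals, eigvecs = np.linalg.eigh(K / len(X))
eigvals = eigvals[::-1]

# A target supported on top eigenvectors is "more learnable" than one
# supported on the tail of the spectrum.
easy_target = eigvecs[:, -1]              # top eigenfunction sample
hard_target = eigvecs[:, 0]               # bottom eigenfunction sample
```

The paper's point, in these terms, is that spectra computed under idealized symmetric data-measures can still bound what happens for the empirical spectrum on realistic, non-symmetric data.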


Incremental Variational Sparse Gaussian Process Regression

Neural Information Processing Systems

Recent work on scaling up Gaussian process regression (GPR) to large datasets has primarily focused on sparse GPR, which leverages a small set of basis functions to approximate the full Gaussian process during inference. However, the majority of these approaches are batch methods that operate on the entire training dataset at once, precluding the use of datasets that are streaming or too large to fit into memory. Although previous work has considered incrementally solving variational sparse GPR, most algorithms fail to update the basis functions and therefore perform suboptimally. We propose a novel incremental learning algorithm for variational sparse GPR based on stochastic mirror ascent of probability densities in reproducing kernel Hilbert space. This new formulation allows our algorithm to update basis functions online in accordance with the manifold structure of probability densities for fast convergence. We conduct several experiments and show that our proposed approach achieves better empirical performance in terms of prediction error than the recent state-of-the-art incremental solutions to variational sparse GPR.
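A minimal streaming sketch of sparse GPR with fixed inducing points, accumulating sufficient statistics batch by batch so the full dataset never sits in memory; the paper's contribution is precisely the part omitted here, namely updating the basis functions online via stochastic mirror ascent. All constants below are illustrative.

```python
import numpy as np

# Streaming sparse GPR sketch: FIXED inducing points whose sufficient
# statistics are accumulated over mini-batches. (The paper additionally
# updates the basis functions online; that part is omitted here.)

def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

Z = np.linspace(0, 1, 15)[:, None]        # inducing inputs (kept fixed)
Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
noise = 1e-2
A = np.zeros((len(Z), len(Z)))            # running sum of Kzx Kxz
b = np.zeros(len(Z))                      # running sum of Kzx y

rng = np.random.default_rng(2)
for _ in range(20):                       # stream of mini-batches
    Xb = rng.uniform(0, 1, size=(32, 1))
    yb = np.sin(2 * np.pi * Xb[:, 0])
    Kzx = rbf(Z, Xb)
    A += Kzx @ Kzx.T
    b += Kzx @ yb

# Titsias-style predictive mean from the accumulated statistics.
w = np.linalg.solve(Kzz + A / noise, b / noise)
Xs = np.linspace(0, 1, 50)[:, None]
pred = rbf(Xs, Z) @ w
err = np.max(np.abs(pred - np.sin(2 * np.pi * Xs[:, 0])))
```

Because only the m×m statistics A and b are retained, memory is independent of the number of observations seen, which is the property that makes streaming use possible; the cost of fixing Z is the suboptimality the paper's basis-function updates address.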


Resource-Efficient Cooperative Online Scalar Field Mapping via Distributed Sparse Gaussian Process Regression

Ding, Tianyi, Zheng, Ronghao, Zhang, Senlin, Liu, Meiqin

arXiv.org Artificial Intelligence

Cooperative online scalar field mapping is an important task for multi-robot systems. Gaussian process regression is widely used to construct a map that represents spatial information with confidence intervals, but its high computation and communication costs make cooperative online mapping difficult. This letter proposes a resource-efficient cooperative online field mapping method via distributed sparse Gaussian process regression. A novel distributed online Gaussian process evaluation method is developed such that robots can cooperatively evaluate and find observations of sufficient global utility to reduce computation. The bounded errors of distributed aggregation results are guaranteed theoretically, and the performance of the proposed algorithms is validated by real online light field mapping experiments.
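One standard building block for fusing local GP map estimates across robots is inverse-variance (product-of-experts style) aggregation; the letter's distributed evaluation and aggregation scheme is more elaborate, so treat this as a generic sketch with made-up numbers.

```python
import numpy as np

# Generic sketch of fusing local GP map estimates from several robots by
# inverse-variance (product-of-experts style) weighting. This is a common
# baseline, not the letter's actual distributed scheme.

def fuse(means, variances):
    """Combine per-robot predictive means/variances at the same locations."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    fused_mean = sum(p * m for p, m in zip(precisions, means)) / total
    return fused_mean, 1.0 / total        # fused variance shrinks

# Two robots predicting the scalar field at two map cells.
m, v = fuse([np.array([1.0, 2.0]), np.array([3.0, 2.0])],
            [np.array([0.1, 1.0]), np.array([0.9, 1.0])])
```

Only a mean and a variance per map cell need to cross the network, which is why variance-weighted fusion is attractive when communication is the bottleneck.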