 Kim, Hwanwoo


Bayesian Optimization with Noise-Free Observations: Improved Regret Bounds via Random Exploration

arXiv.org Artificial Intelligence

We introduce new algorithms rooted in scattered data approximation that rely on a random exploration step to ensure that the fill-distance of query points decays at a near-optimal rate. Our algorithms retain the ease of implementation of the classical GP-UCB algorithm and satisfy cumulative regret bounds that nearly match those conjectured in [Vak22], hence solving a COLT open problem. Furthermore, the new algorithms outperform GP-UCB and other popular Bayesian optimization strategies in several examples.
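The combination the abstract describes — a GP-UCB-style loop interleaved with random queries that keep the fill-distance of the design decaying — can be sketched as below. This is a minimal illustrative sketch, not the paper's algorithm: the function name `ucb_with_random_exploration`, the RBF kernel, the length-scale, and the simple alternation schedule between random and UCB steps are all assumptions.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between row-stacked point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, jitter=1e-8):
    """GP posterior mean/std at candidates Xs, noise-free observations
    (a small jitter keeps the Cholesky-free solve well conditioned)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    return mu, np.sqrt(var)

def ucb_with_random_exploration(f, bounds, n_iter=20, beta=2.0, seed=0):
    """Maximize f on an interval, alternating uniformly random queries
    (the random exploration step) with standard UCB queries on a grid."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    Xs = np.linspace(lo, hi, 200)[:, None]        # candidate grid
    X = rng.uniform(lo, hi, size=(2, 1))          # initial design
    y = np.array([f(x[0]) for x in X])
    for t in range(n_iter):
        if t % 2 == 0:                            # random exploration step
            x_next = rng.uniform(lo, hi, size=(1, 1))
        else:                                     # GP-UCB step
            mu, sd = gp_posterior(X, y, Xs)
            x_next = Xs[[np.argmax(mu + beta * sd)]]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0, 0]))
    return X[np.argmax(y), 0], y.max()
```

The random steps guarantee the queries fill the domain regardless of what the UCB steps do, which is the mechanism behind the fill-distance decay the abstract refers to.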


ReTaSA: A Nonparametric Functional Estimation Approach for Addressing Continuous Target Shift

arXiv.org Artificial Intelligence

The presence of distribution shifts poses a significant challenge for deploying modern machine learning models in real-world applications. This work focuses on the target shift problem in a regression setting (Zhang et al., 2013; Nguyen et al., 2016). More specifically, the continuous target variable y (also known as the response variable) has different marginal distributions in the training (source) domain and the testing (target) domain, while the conditional distribution of features x given y remains the same. While most of the literature focuses on classification tasks with a finite target space, the regression problem has an infinite-dimensional target space, which makes many of the existing methods inapplicable. In this work, we show that the continuous target shift problem can be addressed by estimating the importance weight function from an ill-posed integral equation. We propose a nonparametric regularized approach named ReTaSA to solve the ill-posed integral equation and provide theoretical justification for the estimated importance weight function. The effectiveness of the proposed method has been demonstrated with extensive numerical studies on synthetic and real-world datasets.
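The general idea — discretize the ill-posed integral equation relating the weight function w(y) to the shift in the feature distribution, then solve it with Tikhonov regularization — can be sketched as follows. This is a simplified moment-matching discretization, not the authors' ReTaSA estimator: the Gaussian-bump feature map, the basis size `k`, and the ridge penalty `lam` are all illustrative assumptions.

```python
import numpy as np

def gauss_feats(v, centers, s=0.5):
    """Gaussian bump features evaluated at the 1-D points v."""
    return np.exp(-(v[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))

def estimate_weight_fn(x_src, y_src, x_tgt, lam=1e-2, k=20):
    """Tikhonov-regularized moment matching for the importance weight w(y):
    parametrize w(y) = sum_m beta_m psi_m(y) and ask the reweighted source
    feature mean of x to match the target feature mean,
        (1/n) sum_i w(y_i) phi(x_i)  ~=  (1/m) sum_j phi(x'_j),
    which is a finite-dimensional surrogate for the integral equation."""
    lo = min(x_src.min(), x_tgt.min())
    hi = max(x_src.max(), x_tgt.max())
    cx = np.linspace(lo, hi, k)                    # feature centers for x
    cy = np.linspace(y_src.min(), y_src.max(), k)  # basis centers for y
    Phi = gauss_feats(x_src, cx)                   # (n, k) source x features
    Psi = gauss_feats(y_src, cy)                   # (n, k) basis in y
    A = Phi.T @ Psi / len(x_src)                   # discretized operator
    b = gauss_feats(x_tgt, cx).mean(axis=0)        # target feature mean
    # ridge-regularized solve of the (ill-conditioned) linear system
    beta = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
    return lambda y: np.clip(gauss_feats(y, cy) @ beta, 0.0, None)
```

The regularization is what makes the inversion stable: without the `lam` term the discretized operator is badly conditioned, mirroring the ill-posedness of the underlying integral equation.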


Optimization on Manifolds via Graph Gaussian Processes

arXiv.org Machine Learning

Optimization problems on manifolds are ubiquitous in science and engineering. For instance, low-rank matrix completion and rotational alignment of 3D bodies can be formulated as optimization problems over spaces of matrices that are naturally endowed with manifold structures. These matrix manifolds belong to agreeable families [56] for which Riemannian gradients, geodesics, and other geometric quantities have closed-form expressions that facilitate the use of Riemannian optimization algorithms [19, 1, 9]. In contrast, this paper is motivated by optimization problems where the search space is a manifold that the practitioner can only access through a discrete point cloud representation, preventing direct use of Riemannian optimization algorithms. Moreover, the hidden manifold may not belong to an agreeable family, further hindering the use of classical methods. Illustrative examples where manifolds are represented by point cloud data include computer vision, robotics, and shape analysis in geometric morphometrics [33, 23, 25]. Additionally, across many applications in data science, high-dimensional point cloud data contains low-dimensional structure that can be modeled as a manifold for algorithmic design and theoretical analysis [14, 3, 27]. Motivated by these problems, this paper introduces a Bayesian optimization method with convergence guarantees to optimize an expensive-to-evaluate function on a point cloud of manifold samples.
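A minimal sketch of the pipeline the title suggests: build a graph Laplacian on the point cloud, use it to define a Matérn-type graph GP covariance, and run GP-UCB restricted to the cloud's nodes. The kNN construction, the covariance (κ²I + L)^(−ν), and the acquisition schedule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def graph_laplacian(P, k=10):
    """Unnormalized Laplacian of a symmetrized kNN graph on point cloud P."""
    D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    n = len(P)
    W = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]      # k nearest, excluding self
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / D2[i, idx[i]].max())
    W = np.maximum(W, W.T)                        # symmetrize
    return np.diag(W.sum(1)) - W

def graph_gp_cov(L, kappa=1.0, nu=2):
    """Matern-type graph GP covariance (kappa^2 I + L)^(-nu),
    rescaled to unit average marginal variance."""
    n = len(L)
    C = np.linalg.matrix_power(np.linalg.inv(kappa ** 2 * np.eye(n) + L), nu)
    return C / np.mean(np.diag(C))

def bo_on_cloud(f_vals, C, n_iter=15, beta=2.0, seed=0):
    """GP-UCB over the nodes of the point cloud; f_vals is the oracle,
    revealed only at queried indices."""
    rng = np.random.default_rng(seed)
    queried = [int(rng.integers(len(f_vals)))]
    for _ in range(n_iter):
        y = f_vals[queried]
        Kqq = C[np.ix_(queried, queried)] + 1e-8 * np.eye(len(queried))
        Ksq = C[:, queried]
        mu = Ksq @ np.linalg.solve(Kqq, y)
        var = np.clip(np.diag(C) - np.sum(Ksq * np.linalg.solve(Kqq, Ksq.T).T, 1),
                      0.0, None)
        ucb = mu + beta * np.sqrt(var)
        ucb[queried] = -np.inf                    # do not repeat queries
        queried.append(int(np.argmax(ucb)))
    return queried[int(np.argmax(f_vals[queried]))]
```

The point is that all geometry enters only through the Laplacian of the sampled cloud, so no closed-form Riemannian quantities are needed.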


A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors

arXiv.org Machine Learning

Hierarchical models with gamma hyperpriors provide a flexible, sparsity-promoting framework to bridge $L^1$ and $L^2$ regularization in Bayesian formulations of inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to \textit{maximum a posteriori} estimation. The potential to perform uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement. In addition, it lends itself naturally to model selection for the choice of hyperparameters. We illustrate the performance of our methodology in several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time series data.
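The alternating structure — a Gaussian variational update for the unknown, then a moment-based update for the per-coordinate variances — can be sketched for a linear model y = Ax + noise with conditionally Gaussian prior x_i ~ N(0, θ_i). For tractability this sketch uses an inverse-gamma-style conjugate update for the θ_i; the paper's gamma hyperpriors lead to different fixed-point updates, so treat every update rule and hyperparameter below as an assumption.

```python
import numpy as np

def vi_alternating_sparse(A, y, sigma=0.05, alpha=1.5, beta=1e-3, n_iter=50):
    """Illustrative variational iterative alternating scheme for y = A x + noise:
    alternate a Gaussian update for q(x) with a conjugate-style update for the
    prior variances theta_i, driven by the posterior second moment E[x_i^2].
    Returns the posterior mean, marginal stds, and learned variances."""
    n = A.shape[1]
    theta = np.ones(n)
    for _ in range(n_iter):
        # q(x): Gaussian with precision A^T A / sigma^2 + diag(1/theta)
        H = A.T @ A / sigma ** 2 + np.diag(1.0 / theta)
        Sigma = np.linalg.inv(H)
        m = Sigma @ (A.T @ y) / sigma ** 2
        # variance update from E[x_i^2] = m_i^2 + Sigma_ii; small E[x_i^2]
        # drives theta_i toward zero, which is the sparsifying mechanism
        Ex2 = m ** 2 + np.diag(Sigma)
        theta = (beta + Ex2 / 2) / (alpha + 0.5)
    return m, np.sqrt(np.diag(Sigma)), theta
```

Unlike a pure MAP scheme, the second line of the θ-update uses the full posterior second moment, so the returned `Sigma` diagonal provides the uncertainty quantification the abstract highlights.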