Regret Bounds for Gaussian-Process Optimization in Large Domains

Neural Information Processing Systems

The goal of this paper is to characterize Gaussian-Process optimization in the setting where the function domain is large relative to the number of admissible function evaluations, i.e., where it is impossible to find the global optimum. We provide upper bounds on the suboptimality (Bayesian simple regret) of the solution found by optimization strategies that are closely related to the widely used expected improvement (EI) and upper confidence bound (UCB) algorithms. These regret bounds illuminate the relationship between the number of evaluations, the domain size, and the optimality of the retrieved function value.
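The UCB strategy the abstract refers to can be sketched for a finite domain. The following minimal NumPy implementation is our own illustration, not the paper's code: the RBF kernel, its lengthscale, the noise level, and the exploration weight `beta` are all assumed here for concreteness.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.25):
    # Squared-exponential kernel between point sets A (n, d) and B (m, d).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(X, y, candidates, noise=1e-6):
    # Exact GP posterior mean and standard deviation at the candidate points.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, candidates)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v ** 2).sum(axis=0)   # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 0.0))

def ucb_choice(X, y, candidates, beta=2.0):
    # UCB acquisition: query where posterior mean + sqrt(beta) * std is largest.
    mu, sigma = gp_posterior(X, y, candidates)
    return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]
```

Repeatedly calling `ucb_choice` and appending the observed value to `(X, y)` gives the sequential optimization loop whose simple regret the paper bounds; in the large-domain regime, the candidate grid would be far larger than the evaluation budget.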


Standard Acquisition Is Sufficient for Asynchronous Bayesian Optimization

Riegler, Ben, Odgers, James, Fortuin, Vincent

arXiv.org Machine Learning

Asynchronous Bayesian optimization is widely used for gradient-free optimization in domains with independent parallel experiments and varying evaluation times. Existing methods posit that standard acquisitions lead to redundant and repeated queries, proposing complex solutions to enforce diversity in queries. Challenging this fundamental premise, we show that methods, like the Upper Confidence Bound, can in fact achieve theoretical guarantees essentially equivalent to those of sequential Thompson sampling. A conceptual analysis of asynchronous Bayesian optimization reveals that existing works neglect intermediate posterior updates, which we find to be generally sufficient to avoid redundant queries. Further investigation shows that by penalizing busy locations, diversity-enforcing methods can over-explore in asynchronous settings, reducing their performance. Our extensive experiments demonstrate that simple standard acquisition functions match or outperform purpose-built asynchronous methods across synthetic and real-world tasks.
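The role of intermediate posterior updates can be illustrated with a toy one-dimensional GP; the kernel, objective, and two-worker scenario below are our own illustrative assumptions, not the paper's experiments. Without any update between launches, a second worker receives the identical UCB maximizer (the redundant query that motivated diversity-enforcing methods); once one completed result is folded into the posterior, the maximizer moves on its own.

```python
import numpy as np

def ucb_argmax(X, y, grid, ls=0.2, beta=2.0, noise=1e-6):
    # UCB maximizer under a minimal 1-D RBF-kernel GP posterior.
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)
    if len(X) == 0:                        # no data yet: prior mean 0, std 1
        mu, sd = np.zeros(len(grid)), np.ones(len(grid))
    else:
        X, y = np.asarray(X, float), np.asarray(y, float)
        K = k(X, X) + noise * np.eye(len(X))
        Ks = k(X, grid)
        mu = Ks.T @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
        sd = np.sqrt(np.maximum(var, 0.0))
    return grid[np.argmax(mu + np.sqrt(beta) * sd)]

grid = np.linspace(0.0, 1.0, 101)
f = lambda x: -(x - 0.3) ** 2              # toy objective

x_first = ucb_argmax([], [], grid)         # worker A's query
x_stale = ucb_argmax([], [], grid)         # worker B with a stale posterior:
                                           #   same posterior, same query
x_fresh = ucb_argmax([x_first], [f(x_first)], grid)  # B after A's result lands
```

Here `x_stale == x_first` (the redundancy the prior literature worried about), while `x_fresh` differs because the posterior standard deviation collapses at the completed query; no explicit penalty on busy locations is needed, mirroring the abstract's claim.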



11704817e347269b7254e744b5e22dac-Paper.pdf

Neural Information Processing Systems

For example, a real-time communications service may be interested in tuning the parameters of a control policy to adapt video quality in real time in order to maximize video quality and minimize latency [10, 17].


Appendix of Deep Stochastic Processes via Functional Markov Transition Operators: A Proofs

Neural Information Processing Systems

A.1 Proof of Proposition 4.1 (see page 4)

A.2 Proof of Proposition 4.2 (see page 4): MTOs in the form of Equation (9) are consistent and exchangeable; MTOs are consistent and exchangeable in the general form.

These convex functions are then randomly shifted and rescaled to increase diversity. To circumvent memory issues, we use deep sets in this instance. Note that we do not share parameters among iterations.