Stopping Bayesian Optimization with Probabilistic Regret Bounds
Neural Information Processing Systems
Bayesian optimization is a popular framework for efficiently tackling black-box search problems. As a rule, these algorithms operate by iteratively choosing what to evaluate next until some predefined budget has been exhausted. We investigate replacing this de facto stopping rule with criteria based on the probability that a point satisfies a given set of conditions. We focus on the prototypical example of an $(\epsilon, \delta)$-criterion: stop when a solution has been found whose value is within $\epsilon > 0$ of the optimum with probability at least $1 - \delta$ under the model. For Gaussian process priors, we show that Bayesian optimization satisfies this criterion under mild technical assumptions.
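The $(\epsilon, \delta)$-criterion described above can be sketched with a Monte Carlo check under the model's posterior. The snippet below is a minimal illustration, not the paper's algorithm: it assumes the search space has been discretized to a finite candidate set with a known Gaussian posterior (mean vector and covariance matrix), and the function name `epsilon_delta_stop` is invented for this example.

```python
import numpy as np

def epsilon_delta_stop(mean, cov, best_idx, eps=0.1, delta=0.05,
                       n_samples=4000, seed=0):
    """Monte Carlo check of an (eps, delta)-criterion under a Gaussian posterior.

    mean, cov: posterior mean vector and covariance matrix of f over a
        finite candidate set (a discretization; an assumption of this sketch).
    best_idx: index of the incumbent solution.
    Returns True if, under the model,
        P(f[best_idx] >= max_j f[j] - eps) >= 1 - delta.
    """
    rng = np.random.default_rng(seed)
    # Joint draws from the posterior over all candidates at once.
    f = rng.multivariate_normal(mean, cov, size=n_samples)
    # Fraction of draws in which the incumbent is eps-optimal.
    prob = np.mean(f[:, best_idx] >= f.max(axis=1) - eps)
    return prob >= 1 - delta

# Toy posterior: the incumbent (index 0) dominates with small uncertainty,
# so the criterion fires and optimization would stop.
mean = np.array([1.0, 0.2, 0.1])
cov = 0.001 * np.eye(3)
print(epsilon_delta_stop(mean, cov, best_idx=0, eps=0.1))  # True
```

With a confident posterior the probability estimate is essentially 1 and the rule triggers; with equal means and large variances the incumbent is $\epsilon$-optimal in only a fraction of draws and the loop would continue.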