Bayesian Optimization with Gradients
Jian Wu, Matthias Poloczek, Andrew G. Wilson, Peter Frazier
Neural Information Processing Systems
Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting.
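The modeling ingredient behind derivative-enabled Bayesian optimization is a Gaussian process conditioned jointly on function values and gradients, which is possible because the derivative of a GP is itself a GP with covariances given by kernel derivatives. The sketch below (in Python with NumPy) is a minimal 1-D illustration of that conditioning step only; it is not the paper's d-KG acquisition function, and the kernel choice, toy objective, and helper names (rbf, rbf_d2, rbf_dd) are illustrative assumptions, not from the source.

import numpy as np

def rbf(x1, x2, ell=1.0, sf2=1.0):
    # Squared-exponential kernel k(x1, x2) on 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return sf2 * np.exp(-0.5 * d**2 / ell**2)

def rbf_d2(x1, x2, ell=1.0, sf2=1.0):
    # Cov(f(x1), f'(x2)) = dk/dx2 for the RBF kernel.
    d = x1[:, None] - x2[None, :]
    return rbf(x1, x2, ell, sf2) * d / ell**2

def rbf_dd(x1, x2, ell=1.0, sf2=1.0):
    # Cov(f'(x1), f'(x2)) = d^2 k / (dx1 dx2) for the RBF kernel.
    d = x1[:, None] - x2[None, :]
    return rbf(x1, x2, ell, sf2) * (1.0 / ell**2 - d**2 / ell**4)

# Toy objective and its exact derivative (assumed for illustration).
f = lambda x: np.sin(3 * x)
fp = lambda x: 3 * np.cos(3 * x)

X = np.array([-1.0, 0.2, 0.9])            # evaluation points
y = np.concatenate([f(X), fp(X)])         # stacked values and gradients

# Joint covariance of the observation vector [f(X); f'(X)].
K = np.block([
    [rbf(X, X),       rbf_d2(X, X)],
    [rbf_d2(X, X).T,  rbf_dd(X, X)],
]) + 1e-8 * np.eye(2 * len(X))            # jitter for numerical stability

Xs = np.linspace(-1.5, 1.5, 7)            # test locations
Ks = np.hstack([rbf(Xs, X), rbf_d2(Xs, X)])  # Cov(f(Xs), [f(X); f'(X)])

alpha = np.linalg.solve(K, y)
mu = Ks @ alpha                           # posterior mean of f at Xs
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance

Conditioning on gradients in this way tightens the posterior around each evaluated point, which is why an acquisition strategy such as d-KG can obtain greater one-step value of information per evaluation than its derivative-free counterpart.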