Bayesian Optimization with Gradients
Jian Wu, Matthias Poloczek, Andrew G. Wilson, Peter Frazier
Bayesian optimization has shown success in global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (dKG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. dKG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the dKG acquisition function and its gradient using a novel fast discretization-free technique. We show dKG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
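As a concrete illustration of the ideas summarized above, the sketch below builds a Gaussian-process surrogate that jointly models noisy function values and derivatives through an RBF kernel and its cross-derivatives, and estimates a knowledge-gradient acquisition by Monte Carlo over a discretization. This is a hedged illustration only, not the authors' d-KG: the paper's method is discretization-free and also handles batches and incomplete gradients, while the 1-D kernel, toy objective, grid, and hyperparameters here are assumptions chosen for readability.

```python
# Minimal sketch (NOT the paper's d-KG implementation): a GP that jointly models
# noisy function values and derivatives, plus a Monte-Carlo, discretized
# knowledge-gradient acquisition. All names and constants are illustrative.
import numpy as np

LENGTHSCALE = 0.5
NOISE = 1e-4  # observation-noise variance for both values and gradients


def joint_kernel(xa, xb, ell=LENGTHSCALE):
    """Covariance of the stacked vectors [f(xa); f'(xa)] and [f(xb); f'(xb)]
    under an RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    d = xa[:, None] - xb[None, :]
    k = np.exp(-0.5 * d ** 2 / ell ** 2)
    k_fd = (d / ell ** 2) * k                          # Cov(f(xa), f'(xb))
    k_dd = (1.0 / ell ** 2 - d ** 2 / ell ** 4) * k    # Cov(f'(xa), f'(xb))
    return np.block([[k, k_fd], [-k_fd, k_dd]])


def joint_posterior(X, y, g, Xstar):
    """Posterior mean and covariance of [f(Xstar); f'(Xstar)] given value
    observations y and derivative observations g at the points X."""
    K = joint_kernel(X, X) + NOISE * np.eye(2 * len(X))
    Ks = joint_kernel(Xstar, X)
    alpha = np.linalg.solve(K, np.concatenate([y, g]))
    mean = Ks @ alpha
    cov = joint_kernel(Xstar, Xstar) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov


def knowledge_gradient(x_cand, X, y, g, grid, n_samples=64, seed=0):
    """Monte-Carlo estimate of the one-step value of information of observing a
    noisy (value, derivative) pair at x_cand: the expected drop in the minimum
    posterior mean over the grid after the fantasized observation."""
    rng = np.random.default_rng(seed)
    m = len(grid)
    best_now = joint_posterior(X, y, g, grid)[0][:m].min()
    xc = np.array([x_cand])
    mean_c, cov_c = joint_posterior(X, y, g, xc)         # joint over [f(xc), f'(xc)]
    cov_c = 0.5 * (cov_c + cov_c.T) + NOISE * np.eye(2)  # symmetrize, add noise
    gains = np.empty(n_samples)
    for s in range(n_samples):
        y_fant, g_fant = rng.multivariate_normal(mean_c, cov_c)
        mu_next = joint_posterior(np.append(X, xc), np.append(y, y_fant),
                                  np.append(g, g_fant), grid)[0][:m]
        gains[s] = best_now - mu_next.min()
    return gains.mean()


if __name__ == "__main__":
    f = lambda x: np.sin(3.0 * x) + 0.2 * x ** 2       # toy objective (assumed)
    df = lambda x: 3.0 * np.cos(3.0 * x) + 0.4 * x     # its derivative
    X = np.array([-1.5, 0.0, 1.2])                     # points evaluated so far
    y, g = f(X), df(X)
    grid = np.linspace(-2.0, 2.0, 101)
    kg = [knowledge_gradient(x, X, y, g, grid) for x in grid]
    print("suggested next evaluation:", grid[int(np.argmax(kg))])
```

The fantasized observation at each candidate includes both a value and a derivative, which is what distinguishes this acquisition from a derivative-free knowledge gradient; conditioning on gradients shrinks the posterior more per evaluation, so fewer objective evaluations are needed to locate a good minimum.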
Neural Information Processing Systems
Dec-31-2017