Matthias Poloczek
Multi-Information Source Optimization
Matthias Poloczek, Jialei Wang, Peter Frazier
We consider Bayesian methods for multi-information source optimization (MISO), in which we seek to optimize an expensive-to-evaluate black-box objective function while also accessing cheaper but biased and noisy approximations ("information sources"). We present a novel algorithm that outperforms the state of the art for this problem by using a Gaussian process covariance kernel better suited to MISO than those used by previous approaches, and an acquisition function based on a one-step optimality analysis supported by efficient parallelization. We also provide a novel technique to guarantee the asymptotic quality of the solution provided by this algorithm. Experimental evaluations demonstrate that this algorithm consistently finds designs of higher value at less cost than previous approaches.
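The abstract does not spell out the covariance kernel; a minimal sketch of one natural MISO construction, in which each cheap information source is modeled as the true objective plus an independent Gaussian-process bias term, is given below. The function names and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two input vectors."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(y))
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def miso_kernel(s, x, t, y, bias_params):
    """Covariance between an observation of source s at x and source t at y.

    Models source s as f_s(x) = f_0(x) + delta_s(x), where f_0 is the truth
    (source 0) and the delta_s (s >= 1) are independent GP bias terms, so
        Cov(f_s(x), f_t(y)) = k_0(x, y) + [s == t != 0] * k_s(x, y).
    bias_params[s] = (lengthscale, variance) of the bias kernel for source s.
    """
    k = rbf(x, y)  # shared truth component, present for every pair of sources
    if s == t and s != 0:
        ls, var = bias_params[s]
        k += rbf(x, y, lengthscale=ls, variance=var)  # source-specific bias
    return k

# Example: two cheap sources (1 and 2) approximating the truth (source 0).
bias_params = {1: (0.5, 0.1), 2: (2.0, 0.5)}
print(miso_kernel(1, [0.2, 0.3], 1, [0.25, 0.3], bias_params))  # truth + bias
print(miso_kernel(1, [0.2, 0.3], 2, [0.25, 0.3], bias_params))  # truth only
```

Under this construction, observations of the same cheap source are more strongly correlated with each other than with the truth, which lets the model learn and correct each source's bias.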
Scalable Global Optimization via Local Bayesian Optimization
David Eriksson, Michael Pearce, Jacob Gardner, Ryan D. Turner, Matthias Poloczek
Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition.
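The paper's proposed remedy replaces a single global model with local models confined to trust regions. A minimal sketch of the kind of trust-region bookkeeping such local Bayesian optimization uses, expanding the region after repeated improvements and shrinking it after repeated failures, follows; the thresholds below are illustrative defaults, not necessarily the paper's exact settings.

```python
def update_trust_region(length, success_count, failure_count,
                        success_tol=3, failure_tol=5,
                        length_min=2**-7, length_max=1.6):
    """Adapt the side length of a hyper-rectangular trust region.

    After success_tol consecutive improvements the region doubles (capped at
    length_max); after failure_tol consecutive non-improvements it halves.
    A region smaller than length_min signals that the local run should
    restart from a fresh region elsewhere.
    """
    if success_count >= success_tol:
        length = min(2.0 * length, length_max)  # trust the local model more
        success_count = 0
    elif failure_count >= failure_tol:
        length = 0.5 * length  # local model is unreliable; search more locally
        failure_count = 0
    restart = length < length_min
    return length, success_count, failure_count, restart

# Example: three consecutive successes trigger an expansion.
print(update_trust_region(0.8, success_count=3, failure_count=0))
```

Restricting the surrogate and the acquisition to such a region counteracts the over-exploration the abstract attributes to global acquisition.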
Bayesian Optimization with Gradients
Jian Wu, Matthias Poloczek, Andrew G. Wilson, Peter Frazier
Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting.
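A standard way for a Gaussian process to exploit derivative information, as d-KG's model does, is to treat gradients as additional observations whose covariances are kernel derivatives. The sketch below shows these identities for the squared-exponential kernel; it is a generic construction under that assumption, not the paper's specific implementation.

```python
import numpy as np

def rbf(x, y, ls=1.0, var=1.0):
    """Squared-exponential kernel."""
    return var * np.exp(-0.5 * np.sum((x - y) ** 2) / ls**2)

def joint_cov(x, y, ls=1.0, var=1.0):
    """Cross-covariance of (f(x), grad f(x)) with (f(y), grad f(y)).

    Uses the identities Cov(f(x), df(y)/dy_j) = dk(x, y)/dy_j and
    Cov(df(x)/dx_i, df(y)/dy_j) = d^2 k(x, y)/dx_i dy_j, which have
    closed forms for the RBF kernel.
    """
    d = len(x)
    k = rbf(x, y, ls, var)
    diff = (x - y) / ls**2
    K = np.empty((d + 1, d + 1))
    K[0, 0] = k
    K[0, 1:] = k * diff    # dk/dy_j =  k * (x_j - y_j) / ls^2
    K[1:, 0] = -k * diff   # dk/dx_i = -k * (x_i - y_i) / ls^2
    K[1:, 1:] = k * (np.eye(d) / ls**2 - np.outer(diff, diff))
    return K

# Example: joint covariance block for two points in 2-D.
x, y = np.array([0.0, 0.0]), np.array([0.5, -0.2])
print(joint_cov(x, y))
```

Stacking these blocks over all observed points yields the joint Gaussian over function values and gradients on which the posterior, and hence the knowledge-gradient acquisition, is computed.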