BOP-Elites, a Bayesian Optimisation algorithm for Quality-Diversity search
Kent, Paul, Branke, Juergen
Quality Diversity (QD) algorithms such as MAP-Elites are a class of optimisation techniques that attempt to find a set of high-performing points from an objective function while enforcing behavioural diversity of the points over one or more interpretable, user-chosen feature functions. In this paper we propose the Bayesian Optimisation of Elites (BOP-Elites) algorithm, which uses techniques from Bayesian Optimisation to explicitly model both quality and diversity with Gaussian Processes. By considering user-defined regions of the feature space as 'niches', our task is to find the optimal solution in each niche. We propose a novel acquisition function to intelligently choose new points that provide the highest expected improvement to the ensemble problem of identifying the best solution in every niche. In this way, each function evaluation enriches our modelling and provides insight into the whole problem, naturally balancing exploration and exploitation of the search space. The resulting algorithm is highly effective at identifying the parts of the search space that belong to a niche in feature space, and at finding the optimal solution in each niche. It is also significantly more sample-efficient than simpler benchmark approaches. BOP-Elites goes further than existing QD algorithms by quantifying the uncertainty around our predictions and offering additional illumination of the search space through surrogate models.
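
As a rough illustration of the idea described above, the sketch below (Python with scikit-learn, not the authors' implementation) shows one BOP-Elites-style iteration: separate Gaussian processes are fitted to the objective and to a one-dimensional feature, niches are taken to be fixed intervals of that feature, and candidate points are scored by expected improvement over the current elite of the niche they are predicted to fall into. The niche boundaries, kernels, helper names and candidate-pool search are illustrative assumptions; the paper's actual acquisition function treats niche-membership uncertainty more carefully.

```python
# Illustrative sketch of a BOP-Elites-style iteration (assumed details,
# not the authors' code): GP surrogates for objective and feature,
# acquisition = expected improvement against the elite of the predicted niche.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    """Standard expected improvement for maximisation."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bop_elites_step(X, y, f, niche_edges, candidates):
    """Pick the next point to evaluate.
    X: evaluated inputs, y: objective values, f: 1-d feature values,
    niche_edges: boundaries partitioning the feature dimension into niches,
    candidates: pool of unevaluated inputs to score."""
    gp_obj = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    gp_feat = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, f)

    # Current elite (best observed objective) in each occupied niche.
    niche_of = np.digitize(f, niche_edges)
    elites = {n: y[niche_of == n].max() for n in np.unique(niche_of)}

    mu_y, sd_y = gp_obj.predict(candidates, return_std=True)
    mu_f, _ = gp_feat.predict(candidates, return_std=True)
    pred_niche = np.digitize(mu_f, niche_edges)

    # Improvement is measured against the elite of the niche a candidate is
    # predicted to land in; empty niches fall back to the worst observed value.
    best_in_niche = np.array([elites.get(n, y.min()) for n in pred_niche])
    scores = expected_improvement(mu_y, sd_y, best_in_niche)
    return candidates[np.argmax(scores)]
```

In this simplified version a candidate's niche is taken from the predicted feature value alone; the point evaluated next is then added to the data and both surrogates are refitted before the next iteration.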
Bayesian Optimization Allowing for Common Random Numbers
Pearce, Michael, Poloczek, Matthias, Branke, Juergen
Bayesian optimization is a powerful tool for expensive stochastic black-box optimization problems such as simulation-based optimization or machine learning hyperparameter tuning. Many stochastic objective functions implicitly require a random number seed as input. By explicitly reusing a seed, a user can exploit common random numbers, comparing two or more inputs under the same randomly generated scenario, such as a common customer stream in a job shop problem, or the same random partition of data into training and validation sets for a machine learning algorithm. With the aim of finding an input with the best average performance over infinitely many seeds, we propose a novel Gaussian process model that jointly models both the output for each seed and the average. We then introduce the Knowledge Gradient for Common Random Numbers, which iteratively determines a combination of input and random seed at which to evaluate the objective and automatically trades off reusing old seeds against querying new seeds, thus overcoming the need to evaluate inputs in batches or to measure differences of pairs, as suggested in previous methods. We investigate the Knowledge Gradient for Common Random Numbers both theoretically and empirically, finding that it achieves significant performance improvements with only moderate added computational cost.
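
A minimal sketch of the joint modelling idea, assuming an additive covariance over (input, seed) pairs of the form k((x, s), (x', s')) = k_avg(x, x') + 1[s = s'] * k_seed(x, x'): the seed-average objective then correlates with the observations only through k_avg, which the posterior-mean computation below exploits. The function names, RBF kernels and hyperparameter values are placeholders, and the Knowledge Gradient acquisition itself is not shown.

```python
# Illustrative sketch (assumed details, not the paper's code): a GP that
# jointly models per-seed outputs and the seed-average objective via the
# additive kernel k((x,s),(x',s')) = k_avg(x,x') + 1[s=s'] * k_seed(x,x').
import numpy as np

def rbf(A, B, variance, lengthscale):
    """Squared-exponential kernel between row-stacked input sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def crn_posterior_mean(X, seeds, y, X_test,
                       var_avg=1.0, var_seed=0.5, ls=1.0, noise=1e-4):
    """Posterior mean of the seed-average objective at X_test.
    X: (n, d) evaluated inputs, seeds: (n,) integer seeds used,
    y: (n,) observed outputs."""
    same_seed = (seeds[:, None] == seeds[None, :]).astype(float)
    # Observations sharing a seed get the extra k_seed covariance term.
    K = rbf(X, X, var_avg, ls) + same_seed * rbf(X, X, var_seed, ls)
    K += noise * np.eye(len(X))
    # Averaging over infinitely many seeds removes the seed-specific part,
    # so the cross-covariance to the average uses k_avg only.
    K_star = rbf(X_test, X, var_avg, ls)
    return K_star @ np.linalg.solve(K, y)
```

On top of such a posterior, a Knowledge-Gradient-style acquisition would score each candidate (input, seed) pair by the expected gain in the best predicted seed-average value, trading off reusing an old seed against querying a new one.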