Limbo: A Fast and Flexible Library for Bayesian Optimization
Antoine Cully, Konstantinos Chatzilygeroudis, Federico Allocati, Jean-Baptiste Mouret
Bayesian Optimization (BO) is designed for the most challenging optimization problems: when the gradient is unknown, evaluating a solution is costly, and evaluations are noisy. This is, for instance, the case when we want to find optimal parameters for a machine learning algorithm [Snoek et al., 2012], because testing a single set of parameters can take hours and because many machine learning algorithms are stochastic. Besides parameter tuning, Bayesian optimization has recently attracted a lot of interest for direct policy search in robot learning [Lizotte et al., 2007, Wilson et al., 2014, Calandra et al., 2016] and online adaptation; for example, it was recently used to allow a legged robot to learn a new gait in about 10-15 trials (2 minutes) after mechanical damage [Cully et al., 2015]. At its core, Bayesian optimization builds a probabilistic model of the function to be optimized (the reward/performance/cost function) from the samples that have already been evaluated [Shahriari et al., 2016]; usually, this model is a Gaussian process [Williams and Rasmussen, 2006]. To select the next sample to evaluate, Bayesian optimization optimizes an acquisition function that leverages the model to predict the most promising regions of the search space.
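To make this loop concrete, the sketch below shows a minimal, self-contained Bayesian optimization in C++. It is not Limbo's API: the Gaussian-process surrogate (squared exponential kernel, zero prior mean), the Upper Confidence Bound acquisition function, the 1-D grid maximization, the toy objective, and all hyperparameter values (length scale, kappa, noise jitter) are illustrative assumptions chosen to keep the example short and dependency-free.

```cpp
// Minimal Bayesian optimization sketch (NOT Limbo's API): a Gaussian-process
// surrogate plus a UCB acquisition function, maximized on a 1-D grid.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Squared exponential kernel; the length scale is an illustrative choice.
double kernel(double a, double b) {
    const double length_scale = 0.15;
    double d = (a - b) / length_scale;
    return std::exp(-0.5 * d * d);
}

// Solve K x = b by Gaussian elimination with partial pivoting. K has one
// row/column per evaluated sample, so a naive solver is enough for a sketch.
std::vector<double> solve(std::vector<std::vector<double>> K, std::vector<double> b) {
    int n = static_cast<int>(b.size());
    for (int i = 0; i < n; ++i) {
        int p = i;
        for (int r = i + 1; r < n; ++r)
            if (std::fabs(K[r][i]) > std::fabs(K[p][i])) p = r;
        std::swap(K[i], K[p]); std::swap(b[i], b[p]);
        for (int r = i + 1; r < n; ++r) {
            double f = K[r][i] / K[i][i];
            for (int c = i; c < n; ++c) K[r][c] -= f * K[i][c];
            b[r] -= f * b[i];
        }
    }
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int c = i + 1; c < n; ++c) s -= K[i][c] * x[c];
        x[i] = s / K[i][i];
    }
    return x;
}

// GP posterior mean and variance at query point q, given samples (xs, ys):
// mu = k*^T K^{-1} y,  var = k(q,q) - k*^T K^{-1} k*.
void gp_predict(const std::vector<double>& xs, const std::vector<double>& ys,
                double q, double noise, double& mu, double& var) {
    int n = static_cast<int>(xs.size());
    std::vector<std::vector<double>> K(n, std::vector<double>(n));
    std::vector<double> ks(n);
    for (int i = 0; i < n; ++i) {
        ks[i] = kernel(xs[i], q);
        for (int j = 0; j < n; ++j)
            K[i][j] = kernel(xs[i], xs[j]) + (i == j ? noise : 0.0);
    }
    std::vector<double> alpha = solve(K, ys);  // K^{-1} y
    std::vector<double> v = solve(K, ks);      // K^{-1} k*
    mu = 0.0; var = kernel(q, q);
    for (int i = 0; i < n; ++i) { mu += ks[i] * alpha[i]; var -= ks[i] * v[i]; }
    if (var < 0.0) var = 0.0;  // guard against round-off
}

// Toy objective standing in for the expensive reward/performance function.
double objective(double x) { return -(x - 0.3) * (x - 0.3) + 0.05 * std::sin(20.0 * x); }

int main() {
    std::vector<double> xs = {0.0, 1.0};  // two initial samples on [0, 1]
    std::vector<double> ys = {objective(0.0), objective(1.0)};
    const double kappa = 2.0, noise = 1e-6;  // illustrative values

    for (int iter = 0; iter < 10; ++iter) {
        // Maximize the acquisition mu + kappa * sigma on a grid: the model,
        // not the expensive function, tells us where to sample next.
        double best_x = 0.0, best_ucb = -1e100;
        for (int g = 0; g <= 200; ++g) {
            double q = g / 200.0, mu, var;
            gp_predict(xs, ys, q, noise, mu, var);
            double ucb = mu + kappa * std::sqrt(var);
            if (ucb > best_ucb) { best_ucb = ucb; best_x = q; }
        }
        xs.push_back(best_x);             // evaluate the expensive function
        ys.push_back(objective(best_x));  // only at the selected point
        std::printf("iter %d: x = %.3f, f(x) = %.4f\n", iter, best_x, ys.back());
    }
    return 0;
}
```

The key property the sketch illustrates is data efficiency: the expensive objective is called once per iteration, while the (cheap) surrogate is queried hundreds of times to decide where that single evaluation should go.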
Nov-22-2016