
Neural Information Processing Systems

For example, a real-time communications service may be interested in tuning the parameters of a control policy to adapt video quality in real time in order to maximize video quality and minimize latency [10, 17].


C qNEHVI under Different Computational Approaches

C.1 Derivation of the IEP formulation of qNEHVI

From (4), the expected noisy joint hypervolume improvement is given by
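The display that should follow "given by" did not survive extraction. As a hedged reconstruction from the standard published definition of NEHVI (not a verbatim copy of the paper's equation (4)), the expected noisy joint hypervolume improvement of a batch X_cand = {x_1, ..., x_q} is the posterior expectation of the joint hypervolume improvement measured against the Pareto frontier P_n over the true (noiseless) objective values at the observed points:

```latex
\alpha_{q\mathrm{NEHVI}}(X_{\mathrm{cand}}) =
  \mathbb{E}_{f \mid \mathcal{D}_n}\!\Big[
    \mathrm{HV}\big(\mathcal{P}_n \cup f(X_{\mathrm{cand}})\big)
    - \mathrm{HV}\big(\mathcal{P}_n\big)
  \Big]
```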

Neural Information Processing Systems

Bayesian optimization specifically aims to increase sample efficiency for hard optimization problems, and consequently can help achieve better solutions without incurring large societal costs. In the 2-objective case, instead of padding the box decomposition, the Pareto frontier under each posterior sample can instead be padded by repeating a point on that Pareto frontier, so that the padded Pareto frontier under every posterior sample has exactly max_t |P_t| points. Since sequential NEHVI is equivalent to qNEHVI with q = 1, we prove Theorem 1 for the general q > 1 case. Recall from Section C.2 that, using the method of common random numbers to fix the base samples, the IEP and CBD formulations are equivalent. Note that the box decomposition of the non-dominated space {S_1, ..., S_{K_t}} and the number of rectangles in the box decomposition depend on ζ_t.
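The padding trick above can be sketched in a few lines of numpy. This is an illustrative helper (the name `pad_pareto_frontier` is mine, not the paper's code): repeating an existing frontier point adds a weakly dominated point, so the padded frontier dominates exactly the same region and has the same hypervolume, but every posterior sample's frontier now has a fixed shape.

```python
import numpy as np

def pad_pareto_frontier(frontier: np.ndarray, target_size: int) -> np.ndarray:
    """Pad an (n x m) Pareto frontier to target_size points by repeating
    its last point. The repeated point is weakly dominated, so the padded
    frontier has the same hypervolume as the original, but a fixed shape
    across posterior samples (enabling batched computation)."""
    n_pad = target_size - frontier.shape[0]
    if n_pad <= 0:
        return frontier
    return np.vstack([frontier, np.repeat(frontier[-1:], n_pad, axis=0)])
```

Fixing the shape this way lets all posterior samples be stacked into one tensor and processed in a single vectorized hypervolume computation.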


Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

Neural Information Processing Systems

Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, qNEHVI, that reduces the computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size.
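As a toy illustration of the quantity being averaged, the following pure-numpy sketch computes the hypervolume improvement of a candidate against each posterior sample's own Pareto frontier and averages the results. The function names and the brute-force sweep are mine, not the paper's implementation (which uses box decompositions and supports q > 1 candidates jointly); this is the two-objective maximization case only.

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref_point: np.ndarray) -> float:
    """Exact hypervolume dominated by a set of 2-objective (maximization)
    points relative to a reference point, via a sorted sweep. Points that
    do not dominate the reference point are ignored; dominated points
    contribute nothing."""
    pts = front[np.all(front > ref_point, axis=1)]
    pts = pts[np.argsort(-pts[:, 0])]  # sort by objective 0, descending
    hv, prev_y = 0.0, ref_point[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref_point[0]) * (y - prev_y)
            prev_y = y
    return hv

def nehvi_estimate(candidate_draws: np.ndarray, frontier_draws: list,
                   ref_point: np.ndarray) -> float:
    """Monte Carlo estimate of noisy expected hypervolume improvement:
    under each joint posterior sample i, the candidate's improvement is
    measured against that sample's own Pareto frontier, then averaged."""
    vals = []
    for y, front in zip(candidate_draws, frontier_draws):
        base = hypervolume_2d(front, ref_point)
        vals.append(hypervolume_2d(np.vstack([front, y[None, :]]), ref_point) - base)
    return float(np.mean(vals))
```

The key difference from noiseless EHVI is that the baseline frontier varies per posterior sample, which is exactly the uncertainty the Bayesian treatment integrates over.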



Robust Multi-Objective Bayesian Optimization Under Input Noise

Daulton, Samuel, Cakmak, Sait, Balandat, Maximilian, Osborne, Michael A., Zhou, Enlu, Bakshy, Eytan

arXiv.org Machine Learning

Bayesian optimization (BO) is a sample-efficient approach for tuning design parameters to optimize expensive-to-evaluate, black-box performance metrics. In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected. Although BO methods have been proposed for optimizing a single objective under input noise, no existing method addresses the practical scenario where there are multiple objectives that are sensitive to input perturbations. In this work, we propose the first multi-objective BO method that is robust to input noise. We formalize our goal as optimizing the multivariate value-at-risk (MVaR), a risk measure of the uncertain objectives. Since directly optimizing MVaR is computationally infeasible in many settings, we propose a scalable, theoretically-grounded approach for optimizing MVaR using random scalarizations. Empirically, we find that our approach significantly outperforms alternative methods and efficiently identifies optimal robust designs that will satisfy specifications across multiple metrics with high probability.
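The random-scalarization ingredient can be illustrated with a toy numpy sketch: collapse the multiple noisy objectives with a weighted Chebyshev scalarization, then take a value-at-risk over the input-noise distribution. The function name and interface here are mine for illustration, not the paper's API, and this computes only the scalarized risk measure for one design, not the full MVaR set.

```python
import numpy as np

def chebyshev_var(samples: np.ndarray, weights: np.ndarray, alpha: float) -> float:
    """Value-at-risk of a Chebyshev scalarization of noisy objectives
    (maximization). samples: (n, m) objective draws for one design under
    input noise. Returns the (1 - alpha)-quantile of the scalarized
    outcomes, i.e. a level the design achieves with probability ~alpha."""
    scalarized = np.min(weights * samples, axis=1)  # worst weighted objective
    return float(np.quantile(scalarized, 1.0 - alpha))
```

Averaging such scalarized risk values over random weight vectors is what makes the approach scale, since it avoids computing the MVaR set directly inside the acquisition function.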


Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

Daulton, Samuel, Balandat, Maximilian, Bakshy, Eytan

arXiv.org Artificial Intelligence

Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization is a powerful approach for identifying the optimal trade-offs between the objectives with very few function evaluations. However, existing methods tend to perform poorly when observations are corrupted by noise, as they do not take into account uncertainty in the true Pareto frontier over the previously evaluated designs. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement criterion to integrate over this uncertainty in the Pareto frontier. We further argue that, even in the noiseless setting, the problem of generating multiple candidates in parallel reduces to that of handling uncertainty in the Pareto frontier. Through this lens, we derive a natural parallel variant of NEHVI that can efficiently generate large batches of candidates. We provide a theoretical convergence guarantee for optimizing a Monte Carlo estimator of NEHVI using exact sample-path gradients. Empirically, we show that NEHVI achieves state-of-the-art performance in noisy and large-batch environments.
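The sample-path gradient idea rests on fixing the Monte Carlo base samples (common random numbers) so the estimator becomes a deterministic, differentiable function of the candidate. A toy numpy sketch of that reparameterization trick, with an illustrative utility standing in for the actual qNEHVI objective:

```python
import numpy as np

rng = np.random.default_rng(0)
BASE = rng.standard_normal((256, 2))  # base samples, drawn once and frozen

def mc_utility(mu: np.ndarray, sigma: float = 0.1) -> float:
    """Deterministic MC estimate via reparameterization: posterior draws
    are mu + sigma * BASE. Because the base samples are fixed, repeated
    evaluations at the same mu agree exactly, and the estimate is a
    (piecewise) smooth function of mu, so sample-path gradients exist."""
    draws = mu + sigma * BASE
    return float(np.mean(np.maximum(draws[:, 0], 0.0)))  # toy improvement-style utility
```

Without frozen base samples, each evaluation would be a fresh random quantity and gradient-based optimization of the estimator would not be well posed.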