Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

Neural Information Processing Systems

Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, qNEHVI, that reduces the computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size.
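The core quantity behind NEHVI can be sketched concretely: for each joint posterior sample over the observed points and the candidate, compute the hypervolume improvement the candidate contributes over the sampled Pareto frontier, then average. The sketch below is a minimal toy illustration, not the paper's implementation (BoTorch's `qNoisyExpectedHypervolumeImprovement` uses box decompositions and quasi-MC sampling); `hypervolume_2d` and `nehvi_mc` are hypothetical helper names, and posterior samples are assumed to be given as plain tuples for two maximized objectives.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` above reference point `ref`
    (maximization, exactly 2 objectives): sweep points by decreasing
    first objective and accumulate non-overlapping rectangles."""
    pts = sorted(points, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if x > ref[0] and y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def nehvi_mc(candidate_samples, observed_samples, ref):
    """Monte Carlo sketch of noisy EHVI: average the candidate's
    hypervolume improvement over joint posterior samples of the
    candidate's and the observed points' true objective values."""
    improvements = []
    for f_cand, f_obs in zip(candidate_samples, observed_samples):
        base = hypervolume_2d(f_obs, ref)
        augmented = hypervolume_2d(list(f_obs) + [f_cand], ref)
        improvements.append(augmented - base)
    return sum(improvements) / len(improvements)
```

Because the Pareto frontier is recomputed per posterior sample, observation noise is handled by integration rather than by trusting noisy observed values, which is the paper's key departure from plain EHVI.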


Amortized Active Generation of Pareto Sets

Steinberg, Daniel M., Wijesinghe, Asiri, Oliveira, Rafael, Koniusz, Piotr, Ong, Cheng Soon, Bonilla, Edwin V.

arXiv.org Machine Learning

We introduce active generation of Pareto sets (A-GPS), a new framework for online discrete black-box multi-objective optimization (MOO). A-GPS learns a generative model of the Pareto set that supports a-posteriori conditioning on user preferences. The method employs a class probability estimator (CPE) to predict non-dominance relations and to condition the generative model toward high-performing regions of the search space. We also show that this non-dominance CPE implicitly estimates the probability of hypervolume improvement (PHVI). To incorporate subjective trade-offs, A-GPS introduces preference direction vectors that encode user-specified preferences in objective space. At each iteration, the model is updated using both Pareto membership and alignment with these preference directions, producing an amortized generative model capable of sampling across the Pareto front without retraining. The result is a simple yet powerful approach that achieves high-quality Pareto set approximations, avoids explicit hypervolume computation, and flexibly captures user preferences. Empirical results on synthetic benchmarks and protein design tasks demonstrate strong sample efficiency and effective preference incorporation.
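Two ingredients of the description above can be made concrete: the non-dominance labels a class probability estimator would be trained on, and alignment with a user preference direction in objective space. The snippet below is a minimal sketch under those assumptions; `non_dominated_mask` and `preference_alignment` are hypothetical helper names, not functions from the A-GPS paper, and objectives are assumed to be maximized.

```python
import math

def non_dominated_mask(points):
    """True where a point is non-dominated (maximization).
    These are the binary labels a non-dominance CPE would learn to predict."""
    mask = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        mask.append(not dominated)
    return mask

def preference_alignment(obj, pref):
    """Cosine similarity between an objective vector and a user-specified
    preference direction: a simple way to score how well a sample matches
    a subjective trade-off."""
    dot = sum(a * b for a, b in zip(obj, pref))
    norm_o = math.sqrt(sum(a * a for a in obj))
    norm_p = math.sqrt(sum(b * b for b in pref))
    return dot / (norm_o * norm_p)
```

In the framework described, signals like these would steer the generative model toward non-dominated, preference-aligned regions without ever computing a hypervolume.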


qEHVI_CR

Eytan Bakshy

Neural Information Processing Systems

However, the sigmoid approximation can result in non-zero error. Recall from Section 3.3 that, given posterior samples, the time complexity on a single-threaded machine is … For the purpose of stating our convergence results, we recall some concepts and notation from Balandat et al. We leave this to future work. We consider the setting from Balandat et al. As noted in Wilson et al. …