Budgeted Multi-Objective Optimization with a Focus on the Central Part of the Pareto Front - Extended Version

arXiv.org Machine Learning

Optimizing nonlinear systems involving expensive (computer) experiments with regard to conflicting objectives is a common challenge. When the number of experiments is severely restricted and/or when the number of objectives increases, uncovering the whole set of optimal solutions (the Pareto front) is out of reach, even for surrogate-based approaches. As non-compromising Pareto optimal solutions usually have little relevance in applications, this work restricts the search to relevant solutions that are close to the Pareto front center. The article starts by characterizing this center. Next, a Bayesian multi-objective optimization method for directing the search towards it is proposed. A criterion for detecting convergence to the center is described. If the criterion is triggered, a widened central part of the Pareto front is targeted such that sufficiently accurate convergence to it is forecast within the remaining budget. Numerical experiments show how the resulting algorithm, C-EHI, better locates the central part of the Pareto front when compared to state-of-the-art Bayesian algorithms.
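
To make the notion of a Pareto front center concrete, here is a minimal Python sketch, assuming the center is approximated by the nondominated point closest to the line joining the ideal and nadir points, one geometric proxy consistent with the abstract; the helper names `pareto_mask` and `front_center` and the toy front are this sketch's own, not the paper's implementation.

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of nondominated rows of F (minimization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if F[j] <= F[i] everywhere and < somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

def front_center(F):
    """Nondominated point closest to the ideal-nadir line (illustrative proxy)."""
    P = F[pareto_mask(F)]
    ideal, nadir = P.min(axis=0), P.max(axis=0)
    d = (nadir - ideal) / np.linalg.norm(nadir - ideal)
    v = P - ideal
    # distance of each nondominated point to the ideal -> nadir line
    dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
    return P[np.argmin(dist)]

# toy example: quarter-circle bi-objective front
t = np.linspace(0, np.pi / 2, 50)
F = np.c_[np.cos(t), np.sin(t)]
print(front_center(F))  # roughly (0.707, 0.707), the balanced compromise
```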


Fast Exact Computation of Expected HyperVolume Improvement

arXiv.org Machine Learning

In multi-objective Bayesian optimization and surrogate-based evolutionary algorithms, Expected HyperVolume Improvement (EHVI) is widely used as the acquisition function guiding the search towards the Pareto front. This paper focuses on the exact calculation of EHVI given a nondominated set, for which the existing exact algorithms are complex and can be inefficient for problems with more than three objectives. Integrating different decomposition algorithms, we propose a new method for calculating the integral in each decomposed high-dimensional box in constant time. We develop three new exact EHVI calculation algorithms based on three region decomposition methods. The first, grid-based algorithm has a complexity of $O(m\cdot n^m)$, with $n$ denoting the size of the nondominated set and $m$ the number of objectives. The Walking Fish Group (WFG)-based algorithm has a worst-case complexity of $O(m\cdot 2^n)$ but better average performance. These two can be applied to problems with any $m$. The third, CLM-based algorithm is only for $m=3$ and is asymptotically optimal with complexity $\Theta(n\log{n})$. Performance comparisons show that all three of our algorithms are at least twice as fast as the state-of-the-art algorithms using the same decomposition methods. When $m>3$, our WFG-based algorithm can be over $10^2$ times faster than the corresponding existing algorithms. Our algorithm is demonstrated in an example involving efficient multi-objective material design with Bayesian optimization.
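
The exact decomposition algorithms above are involved; as a point of reference for what EHVI measures, the quantity can also be estimated straightforwardly by Monte Carlo. The sketch below is an illustration only, not the paper's exact method: it draws samples from independent Gaussian marginals at a candidate and averages the hypervolume gain over a 2-D nondominated set. The names `hv_2d` and `ehvi_mc` and the toy front are assumptions of this sketch.

```python
import numpy as np

def hv_2d(front, ref):
    """Hypervolume dominated by a set of 2-D points w.r.t. ref (minimization)."""
    pts = sorted((tuple(p) for p in front if np.all(p < ref)), key=lambda p: p[0])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:            # sorted by first objective; sweep the second
        if y < prev_y:          # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def ehvi_mc(mu, sigma, front, ref, n_samples=5000, seed=0):
    """Monte Carlo EHVI with independent Gaussian marginals (illustration only)."""
    rng = np.random.default_rng(seed)
    base = hv_2d(front, ref)
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    gains = [hv_2d(np.vstack([front, [y]]), ref) - base for y in samples]
    return float(np.mean(gains))

front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
ref = np.array([4.0, 4.0])
print(ehvi_mc(mu=[1.5, 1.5], sigma=[0.3, 0.3], front=front, ref=ref))
```

The exact algorithms in the paper replace this sampling with closed-form integrals over a decomposition of the nondominated region into boxes, which is what makes constant-time per-box evaluation valuable.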


Bayesian Optimization for Multi-objective Optimization and Multi-point Search

arXiv.org Machine Learning

Bayesian optimization is an effective method for efficiently optimizing unknown objective functions with high evaluation costs. Traditional Bayesian optimization algorithms select one point per iteration for a single objective function; in recent years, Bayesian optimization methods for multi-objective optimization or for multi-point search per iteration have been proposed. However, no Bayesian optimization method is currently known that handles both at the same time in a non-heuristic way. We propose a Bayesian optimization algorithm that can deal with multi-objective optimization and multi-point search simultaneously. First, we define an acquisition function that considers both multi-objective and multi-point search problems. Maximizing this acquisition function analytically is difficult, as the computational cost is prohibitive even with approximations such as sampling; we therefore propose an accurate and computationally efficient method for estimating the gradient of the acquisition function, and develop an algorithm for Bayesian optimization with multi-objective and multi-point search. Numerical experiments show that the performance of the proposed method is comparable or superior to that of heuristic methods.
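
The abstract does not spell out the estimator, but gradients of such Monte Carlo acquisition functions are commonly obtained with the reparameterization trick: writing posterior samples as Y = mu + sd * eps makes the sample average differentiable in the candidate batch. The PyTorch sketch below illustrates this on a stand-in posterior and a simple scalarized q-point improvement; `toy_posterior`, `mc_acquisition`, and the scalarization are all assumptions of this sketch, not the paper's acquisition.

```python
import torch

def toy_posterior(X):
    """Stand-in for a GP posterior over two objectives (illustrative only):
    differentiable mean and standard deviation at a batch X of shape (q, d)."""
    mu = torch.stack([(X ** 2).sum(-1), ((X - 1.0) ** 2).sum(-1)], dim=-1)  # (q, 2)
    sd = 0.1 + 0.05 * torch.sin(X.sum(-1, keepdim=True)).abs().expand_as(mu)
    return mu, sd

def mc_acquisition(X, ref, n=256):
    """Monte Carlo q-point acquisition: expected improvement of the best
    candidate below a reference point, differentiable in X because samples
    are reparameterized as Y = mu + sd * eps."""
    mu, sd = toy_posterior(X)
    eps = torch.randn(n, *mu.shape)                      # base noise, no grad
    Y = mu + sd * eps                                    # (n, q, 2)
    # worst-objective improvement below ref, best over the q candidates
    imp = (ref - Y).clamp(min=0.0).min(dim=-1).values    # (n, q)
    return imp.max(dim=-1).values.mean()

X = torch.randn(3, 2, requires_grad=True)                # q = 3 points in d = 2
alpha = mc_acquisition(X, ref=torch.tensor([2.0, 2.0]))
alpha.backward()                                         # gradient via autodiff
print(alpha.item(), X.grad)
```

This unbiased, low-variance gradient is what enables joint optimization over all q points at once, instead of the greedy one-point-at-a-time heuristics the paper compares against.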


Solution Subset Selection for Final Decision Making in Evolutionary Multi-Objective Optimization

arXiv.org Artificial Intelligence

In general, a multi-objective optimization problem does not have a single optimal solution but a set of Pareto optimal solutions, which forms the Pareto front in the objective space. Various evolutionary algorithms have been proposed to approximate the Pareto front using a pre-specified number of solutions, and a single run typically yields hundreds of solutions. The selection of a single final solution from the obtained solutions is assumed to be done by a human decision maker. However, in many cases, the decision maker does not want to examine hundreds of solutions, so a small subset of the obtained solutions needs to be selected. In this paper, we discuss subset selection from the viewpoint of final decision making. First, we briefly review existing subset selection studies. Next, we formulate an expected loss function for subset selection and show that it is the same as the IGD+ indicator. We then report experimental results where the proposed approach is compared with other indicator-based subset selection methods.
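
Since the expected loss function is shown to coincide with the IGD+ indicator, a minimal sketch helps fix ideas: IGD+ averages, over a set of reference points, the distance to the nearest solution, counting only the objectives in which the solution is worse than the reference point. The greedy selection loop below is a common indicator-based baseline, not necessarily the paper's exact procedure; all names and the toy data are this sketch's assumptions.

```python
import numpy as np

def igd_plus(A, Z):
    """IGD+ of solution set A w.r.t. reference points Z (minimization):
    d+(a, z) = || max(a - z, 0) ||, averaged over Z after taking the min over A."""
    diff = np.maximum(A[None, :, :] - Z[:, None, :], 0.0)  # (|Z|, |A|, m)
    d = np.sqrt((diff ** 2).sum(axis=2))
    return d.min(axis=1).mean()

def greedy_subset(A, Z, k):
    """Greedily pick k rows of A minimizing IGD+ (illustrative baseline)."""
    chosen, remaining = [], list(range(len(A)))
    for _ in range(k):
        best = min(remaining, key=lambda i: igd_plus(A[chosen + [i]], Z))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy example: pick 3 of 100 points approximating a linear front
rng = np.random.default_rng(0)
A = rng.random((100, 2)); A /= A.sum(axis=1, keepdims=True)    # points on x+y=1
Z = np.c_[np.linspace(0, 1, 200), 1 - np.linspace(0, 1, 200)]  # reference front
print(greedy_subset(A, Z, k=3))
```

The one-sided max in d+ is what distinguishes IGD+ from plain IGD: a solution that dominates a reference point incurs zero loss there, matching the decision-making interpretation.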


A Bayesian approach to constrained single- and multi-objective optimization

arXiv.org Machine Learning

This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hypervolume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria, namely the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions, as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
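
To illustrate the spirit of an extended domination rule (the exact formulation is in the paper; the encoding below is one common variant and an assumption of this sketch): feasible points are compared on their objectives, infeasible points on their constraint violations, and any feasible point dominates any infeasible one.

```python
import numpy as np

def extended_key(f, c):
    """Map (objectives f, constraints c <= 0 means feasible) to an extended
    vector for unified domination checks (one possible encoding, assumed here):
    feasible points keep their objectives with zero violation; infeasible
    points get +inf objectives and their constraint violations."""
    viol = np.maximum(c, 0.0)
    if (viol == 0).all():
        return np.concatenate([f, np.zeros_like(viol)])
    return np.concatenate([np.full_like(f, np.inf), viol])

def dominates(u, v):
    """Pareto dominance for minimization."""
    return bool(np.all(u <= v) and np.any(u < v))

# feasible point vs. infeasible point: feasibility wins
f1, c1 = np.array([1.0, 2.0]), np.array([-0.5])   # feasible
f2, c2 = np.array([0.5, 0.5]), np.array([0.3])    # infeasible, better objectives
print(dominates(extended_key(f1, c1), extended_key(f2, c2)))  # True
```

Under this encoding, two infeasible points are compared purely on their violations, which is what lets the hypervolume-based criterion drive the search towards the feasible region before any feasible point has been found.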