Multi-objective Bayesian Optimization using Pareto-frontier Entropy

arXiv.org Machine Learning

We propose Pareto-frontier entropy search (PFES) for multi-objective Bayesian optimization (MBO). Unlike existing entropy search methods for MBO, which consider the entropy of the input space, we define the entropy of the Pareto-frontier in the output space. By using a Pareto-frontier sampled from the current model, PFES provides a simple formula for directly evaluating the entropy. Besides the usual MBO setting, in which all the objectives are observed simultaneously, we also consider the "decoupled" setting, in which the objective functions can be observed separately. PFES can easily derive an acquisition function for the decoupled setting through the entropy of the marginal density for each output variable. For both settings, conditioning on the sampled Pareto-frontier induces dependence among the different objectives in the entropy evaluation. PFES incorporates this dependence into the acquisition function, whereas existing information-based MBO methods employ an independent Gaussian approximation. Our numerical experiments show the effectiveness of PFES on synthetic functions and real-world datasets from materials science.
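
As a rough illustration of the idea of measuring information about the Pareto-frontier in the output space, the sketch below scores candidates by how much the entropy of each Gaussian-process output marginal shrinks when it is truncated at a reference value taken from a sampled frontier. This is a deliberately simplified, independent-marginal stand-in (all function names and array shapes here are hypothetical), not the cell-based PFES formula, which integrates over the exact region implied by the frontier and retains the dependence among objectives.

```python
# Hypothetical, simplified sketch: entropy reduction of a Gaussian marginal
# under one-sided truncation, summed over objectives. PFES itself uses a
# cell decomposition of the region implied by the sampled Pareto-frontier.
import numpy as np
from scipy.stats import norm

def truncated_gaussian_entropy_gain(mu, sigma, upper):
    """Entropy of N(mu, sigma^2) minus entropy of the same Gaussian truncated to (-inf, upper]."""
    beta = (upper - mu) / sigma
    Z = np.maximum(norm.cdf(beta), 1e-12)          # numerical guard
    # H[N] - H[TN] = -log Z + beta * phi(beta) / (2 Z)
    return -np.log(Z) + beta * norm.pdf(beta) / (2.0 * Z)

def pfes_like_acquisition(means, stds, pareto_front):
    """
    means, stds: (n_candidates, n_objectives) GP predictive moments.
    pareto_front: (n_pareto, n_objectives) sampled frontier.
    Returns a per-candidate score (larger = more informative), using the
    crude independent-marginal truncation described in the lead-in.
    """
    # Truncate each marginal at a per-objective reference value from the
    # sampled frontier; this is a gross simplification of the PFES cells.
    upper = pareto_front.min(axis=0)               # (n_objectives,)
    gains = truncated_gaussian_entropy_gain(means, stds, upper[None, :])
    return gains.sum(axis=1)

# toy usage with random predictive moments and a random frontier
rng = np.random.default_rng(0)
means = rng.normal(size=(5, 2))
stds = 0.5 + rng.random((5, 2))
front = rng.normal(size=(10, 2))
print(pfes_like_acquisition(means, stds, front))
```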


Bayesian optimization in ab initio nuclear physics

arXiv.org Machine Learning

Theoretical models of the strong nuclear interaction contain unknown coupling constants (parameters) that must be determined using a pool of calibration data. In cases where the models are complex, leading to time-consuming calculations, it is particularly challenging to systematically search the corresponding parameter domain for the best fit to the data. In this paper, we explore the prospect of applying Bayesian optimization to constrain the coupling constants in chiral effective field theory descriptions of the nuclear interaction. We find that Bayesian optimization performs rather well with low-dimensional parameter domains and foresee that it can be particularly useful for optimization of a smaller set of coupling constants. A specific example could be the determination of leading three-nucleon forces using data from finite nuclei or three-nucleon scattering experiments.

1. Introduction

Mathematical optimization plays a central role in natural science. Indeed, most theoretical predictions are preceded by a calibration stage whereby the parameters of the model are optimized to reproduce a selected set of calibration data. In nuclear physics, the coupling constants of any theory of the strong interaction between protons and neutrons (nucleons) must be determined from experimental data before one can attempt to solve the Schrödinger equation to make quantitative predictions of the properties of atomic nuclei. Typically, measured low-energy nucleon-nucleon (NN) cross sections and the properties of light nuclei with mass number A ≤ 4 have been used for calibrating the NN and three-nucleon (NNN) interaction sectors of the nuclear force, see e.g.
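
To make the calibration workflow concrete, here is a minimal sketch of the kind of Bayesian-optimization loop described above, run on a hypothetical two-parameter objective. The quadratic chi2 stand-in, the bounds, and all other names are invented for illustration; in a real application the objective would be an expensive ab initio calculation compared against the calibration data.

```python
# Minimal sketch, assuming a scikit-learn GP surrogate and expected
# improvement; chi2 is a toy stand-in for an expensive simulation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def chi2(theta):                                  # hypothetical calibration objective
    return np.sum((theta - np.array([0.3, -0.7]))**2)

def expected_improvement(mu, sigma, best):        # EI for minimization
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(1)
bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])     # parameter domain (assumed)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial design
y = np.array([chi2(x) for x in X])

for _ in range(20):                               # BO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, chi2(x_next))

print("best parameters:", X[np.argmin(y)], "chi2:", y.min())
```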


Towards Adaptive Training of Agent-based Sparring Partners for Fighter Pilots

arXiv.org Machine Learning

A key requirement for the current generation of artificial decision-makers is that they should adapt well to changes in unexpected situations. This paper addresses the situation in which an AI for aerial dogfighting, with tunable parameters that govern its behavior, must optimize that behavior with respect to an objective function that is evaluated and learned through simulations. Bayesian optimization with a Gaussian Process surrogate is used as the method for investigating the objective function. One key benefit is that during optimization, the Gaussian Process learns a global estimate of the true objective function, with predicted outcomes and a statistical measure of confidence in areas that have not yet been investigated. Having a model of the objective function is important for understanding possible outcomes in the decision space; for example, this is crucial for training and providing feedback to human pilots. However, standard Bayesian optimization does not perform consistently or provide an accurate Gaussian Process surrogate for highly volatile objective functions. We address these problems by introducing a novel sampling technique called Hybrid Repeat/Multi-point Sampling. This technique gives the AI the ability to learn optimal behaviors in a highly uncertain environment. More importantly, it not only improves the reliability of the optimization, but also produces a better model of the entire objective surface. With this improved model the agent is equipped to predict performance in unexplored scenarios more accurately and efficiently.
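
The sketch below shows one plausible reading of a repeat/multi-point sampling loop, assuming a scikit-learn Gaussian-process surrogate and a toy noisy objective (all names invented): several candidates are taken from the acquisition function each iteration (multi-point) and each is simulated repeatedly (repeat), with the averaged outcome fed back to the model. It is only an illustration in the spirit of the abstract, not the authors' exact procedure.

```python
# Hypothetical repeat/multi-point sampling sketch with a toy volatile objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def noisy_engagement_score(x, rng):               # stand-in volatile objective (maximize)
    return np.sin(5 * x) * np.exp(-x) + rng.normal(scale=0.3)

rng = np.random.default_rng(2)
X = rng.uniform(0, 2, size=(4, 1))
y = np.array([noisy_engagement_score(x[0], rng) for x in X])

n_points, n_repeats = 3, 5                        # multi-point and repeat sizes (assumed)
for _ in range(10):
    gp = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)
    cand = np.linspace(0, 2, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    ucb = mu + 2.0 * sigma                        # simple acquisition (UCB)
    picks = cand[np.argsort(ucb)[-n_points:]]     # top-k candidates per iteration
    for x_next in picks:
        # average repeated noisy evaluations before updating the model
        avg = np.mean([noisy_engagement_score(x_next[0], rng) for _ in range(n_repeats)])
        X = np.vstack([X, x_next])
        y = np.append(y, avg)

print("best observed x:", X[np.argmax(y)].item())
```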


Hypervolume-based Multi-objective Bayesian Optimization with Student-t Processes

arXiv.org Machine Learning

Student-$t$ processes have recently been proposed as an appealing alternative non-parametric function prior. They feature enhanced flexibility and predictive variance. In this work, the use of Student-$t$ processes is explored for multi-objective Bayesian optimization. In particular, an analytical expression for the hypervolume-based probability of improvement is developed for independent Student-$t$ process priors of the objectives. Its effectiveness is shown on a multi-objective optimization problem that is known to be difficult with traditional Gaussian processes.
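
For intuition, the sketch below is a Monte Carlo stand-in for the simpler quantity underlying this acquisition: the probability that a candidate's objective vector lands in the region not dominated by the current Pareto front, under independent Student-t predictive marginals. The paper derives an analytical, hypervolume-weighted expression; everything here (function names, toy front, parameter values) is hypothetical.

```python
# Hypothetical Monte Carlo estimate of the probability that a candidate
# improves the Pareto set under independent Student-t marginals (minimization).
import numpy as np
from scipy.stats import t as student_t

def dominates(a, b):
    """True if a dominates b (minimization): a <= b everywhere, < somewhere."""
    return np.all(a <= b) and np.any(a < b)

def mc_probability_of_improvement(loc, scale, df, pareto_front, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    # sample objective vectors from independent Student-t marginals
    samples = student_t.rvs(df, loc=loc, scale=scale,
                            size=(n_samples, len(loc)), random_state=rng)
    not_dominated = np.array([
        not any(dominates(p, s) for p in pareto_front) for s in samples
    ])
    return not_dominated.mean()

# toy usage: two objectives, a small current front, heavy-tailed predictions
front = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])
print(mc_probability_of_improvement(loc=np.array([0.4, 0.4]),
                                    scale=np.array([0.1, 0.1]),
                                    df=4.0, pareto_front=front))
```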


Lookahead Bayesian Optimization with Inequality Constraints

Neural Information Processing Systems

We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but it has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expensive constraints, but they are limited to greedy strategies that maximize immediate reward. To address this limitation, we propose a lookahead approach that selects the next evaluation in order to maximize the long-term feasible reduction of the objective function. We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIC) and predictive entropy search with constraints (PESC).
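
The sketch below gives a rough, hypothetical flavor of one-step lookahead with an inequality constraint, in the spirit of the abstract but not the authors' algorithm: for each candidate, an observation of the objective and the constraint is fantasized from the GP posteriors, both surrogates are refit, and the candidate is scored by how much the feasible posterior-mean minimum is expected to drop. The toy objective, toy constraint, grid, and fantasy count are all assumptions for illustration.

```python
# Hypothetical one-step lookahead sketch with a single inequality constraint
# (feasible if g(x) <= 0), using toy stand-ins for both black-box functions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

f = lambda x: np.sin(3 * x) + 0.5 * x             # toy objective (minimize)
g = lambda x: np.cos(3 * x)                       # toy constraint

def fit(Xd, yd):
    return GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(Xd, yd)

def feasible_mean_min(gp_f, gp_g, grid):
    mu_f, mu_g = gp_f.predict(grid), gp_g.predict(grid)
    feasible = mu_g <= 0.0                        # crude plug-in feasibility rule
    return mu_f[feasible].min() if feasible.any() else mu_f.min()

rng = np.random.default_rng(3)
X = rng.uniform(0, 3, size=(5, 1))
yf, yg = f(X).ravel(), g(X).ravel()
grid = np.linspace(0, 3, 200).reshape(-1, 1)
cands = grid[::10]                                # coarser candidate set for speed

gp_f, gp_g = fit(X, yf), fit(X, yg)
current = feasible_mean_min(gp_f, gp_g, grid)

scores = []
for x in cands:                                   # one-step lookahead score per candidate
    mf, sf = gp_f.predict(x.reshape(1, -1), return_std=True)
    mg, sg = gp_g.predict(x.reshape(1, -1), return_std=True)
    futures = []
    for _ in range(8):                            # average over fantasy observations
        yfant_f = rng.normal(mf[0], sf[0])
        yfant_g = rng.normal(mg[0], sg[0])
        gp_f2 = fit(np.vstack([X, x]), np.append(yf, yfant_f))
        gp_g2 = fit(np.vstack([X, x]), np.append(yg, yfant_g))
        futures.append(feasible_mean_min(gp_f2, gp_g2, grid))
    scores.append(current - np.mean(futures))     # expected feasible reduction

print("next evaluation:", cands[int(np.argmax(scores))])
```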