BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

Neural Information Processing Systems

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques. BoTorch's modular design facilitates flexible specification and optimization of probabilistic models written in PyTorch, simplifying implementation of new acquisition functions. Our approach is backed by novel theoretical convergence results and made practical by a distinctive algorithmic foundation that leverages fast predictive distributions, hardware acceleration, and deterministic optimization. We also propose a novel one-shot formulation of the Knowledge Gradient, enabled by a combination of our theoretical and software contributions. In experiments, we demonstrate the improved sample efficiency of BoTorch relative to other popular libraries.
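The core Monte-Carlo acquisition idea described in the abstract can be illustrated outside BoTorch itself. Below is a minimal NumPy sketch (not BoTorch's implementation) of an MC estimate of q-Expected Improvement: draw joint samples from the posterior over q candidate points, then average the positive improvement of the best sampled value over the incumbent. Fixing the random seed yields the deterministic sample-average approximation that the paper's optimization approach relies on.

```python
import numpy as np

def mc_q_expected_improvement(mean, cov, best_f, n_samples=4096, seed=0):
    """Monte-Carlo estimate of q-Expected Improvement (qEI).

    mean:   (q,) posterior mean at the q candidate points
    cov:    (q, q) posterior covariance over those points
    best_f: best objective value observed so far (the incumbent)
    """
    rng = np.random.default_rng(seed)
    # Joint posterior draws f ~ N(mean, cov), shape (n_samples, q)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    # Improvement of the best of the q points in each joint draw
    improvement = np.maximum(samples.max(axis=1) - best_f, 0.0)
    return improvement.mean()
```

In BoTorch the same estimator is expressed in PyTorch, so gradients flow through the posterior samples (via reparameterization) and the acquisition function can be maximized with deterministic gradient-based optimizers.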






Bayesian Optimisation for Sequential Experimental Design with Applications in Additive Manufacturing

Zhang, Mimi, Parnell, Andrew, Brabazon, Dermot, Benavoli, Alessio

arXiv.org Artificial Intelligence

Engineering designs are usually performed under strict budget constraints. Collecting a single datum from computer experiments such as computational fluid dynamics can take weeks or months. Each datum obtained, whether from a simulation or a physical experiment, needs to be maximally informative about the goals we are trying to accomplish. It is thus crucial to decide where and how to collect the necessary data to learn the most about the subject of study. Data-driven experimental design appears in many different contexts in chemistry and physics (e.g. Lam et al., 2018), where the design is an iterative process and the outcomes of previous experiments are exploited to make an informed selection of the next design to evaluate. Mathematically, it is often formulated as the optimization of a black-box function (that is, one whose input-output relation is complex and not analytically available). Bayesian optimization (BO) is a well-established technique for black-box optimization and is primarily used in situations where (1) the objective function is complex and does not have a closed form, (2) no gradient information is available, and (3) function evaluations are expensive (see Frazier, 2018, for a tutorial). BO has been shown to be sample-efficient in many domains.
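The iterative design loop the abstract describes (fit a surrogate to past outcomes, pick the most informative next design, evaluate, repeat) can be sketched end to end. The following self-contained one-dimensional example is an illustration only, not code from the paper: it uses a simple Gaussian-process surrogate with a squared-exponential kernel and the expected-improvement criterion, maximized over a grid; practical problems would use a library such as BoTorch.

```python
import math
import numpy as np

erf = np.vectorize(math.erf)  # vectorized stdlib erf for the Gaussian CDF

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel for 1-D inputs (unit prior variance)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations via a Cholesky solve
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best_f):
    z = (mu - best_f) / sigma
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + erf(z / math.sqrt(2)))
    return sigma * (z * cdf + pdf)

def bayes_opt(f, n_init=3, n_iter=10, seed=0):
    # Sequential design on [0, 1]: fit surrogate, maximize EI, evaluate, repeat
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=n_init)
    y = f(X)
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        ei = expected_improvement(mu, sigma, y.max())
        x_next = grid[np.argmax(ei)]
        X = np.append(X, x_next)
        y = np.append(y, f(np.array([x_next])))
    return X, y
```

Each iteration spends one expensive evaluation where the surrogate predicts the greatest expected gain over the best result so far, which is exactly the exploit-previous-outcomes behavior the paragraph describes.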


Multi-Fidelity Cost-Aware Bayesian Optimization

Foumani, Zahra Zanjani, Shishehbor, Mehdi, Yousefpour, Amin, Bostanabad, Ramin

arXiv.org Machine Learning

Bayesian optimization (BO) is increasingly employed in critical applications such as materials design and drug discovery. An increasingly popular strategy in BO is to forgo the sole reliance on high-fidelity data and instead use an ensemble of information sources which provide inexpensive low-fidelity data. The overall premise of this strategy is to reduce the overall sampling costs by querying inexpensive low-fidelity sources whose data are correlated with high-fidelity samples. Here, we propose a multi-fidelity cost-aware BO framework that dramatically outperforms the state-of-the-art technologies in terms of efficiency, consistency, and robustness. We demonstrate the advantages of our framework on analytic and engineering problems and argue that these benefits stem from our two main contributions: (1) we develop a novel acquisition function for multi-fidelity cost-aware BO that safeguards the convergence against the biases of low-fidelity data, and (2) we tailor a newly developed emulator for multi-fidelity BO which enables us to not only simultaneously learn from an ensemble of multi-fidelity datasets, but also identify the severely biased low-fidelity sources that should be excluded from BO.
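The cost-weighting premise, querying the source that offers the most useful information per unit cost while discounting sources poorly correlated with the high-fidelity objective, can be shown with a toy scoring rule. This is a hypothetical illustration only; the paper's actual acquisition function is more sophisticated and explicitly safeguards convergence against low-fidelity bias.

```python
import numpy as np

def cost_weighted_choice(utilities, costs, fidelity_weights):
    """Pick the (candidate, source) pair maximizing utility per unit cost.

    utilities:        (n, s) expected-improvement-like scores for n candidate
                      designs under each of s information sources (hypothetical)
    costs:            (s,) query cost of each source
    fidelity_weights: (s,) discount in [0, 1] reflecting how well each source's
                      data correlate with the high-fidelity objective
    """
    score = utilities * np.asarray(fidelity_weights) / np.asarray(costs)
    cand, src = np.unravel_index(np.argmax(score), score.shape)
    return cand, src, score[cand, src]
```

Under such a rule, a cheap low-fidelity source wins even with a correlation discount whenever its utility-per-cost ratio exceeds that of the expensive high-fidelity source, which is the economic intuition behind multi-fidelity BO.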


The case for fully Bayesian optimisation in small-sample trials

Saikai, Yuji

arXiv.org Artificial Intelligence

While sample efficiency is the main motive for use of Bayesian optimisation when black-box functions are expensive to evaluate, the standard approach based on type II maximum likelihood (ML-II) may fail and result in disappointing performance in small-sample trials. The paper provides three compelling reasons to adopt fully Bayesian optimisation (FBO) as an alternative. First, failures of ML-II are more commonplace than implied by the existing studies using the contrived settings. Second, FBO is more robust than ML-II, and the price of robustness is almost trivial. Third, FBO has become simple to implement and fast enough to be practical. The paper supports the argument using relevant experiments, which reflect the current practice regarding models, algorithms, and software platforms. Since the benefits seem to outweigh the costs, researchers should consider adopting FBO for their applications so that they can guard against potential failures that end up wasting precious research resources.
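The practical difference between ML-II and FBO can be sketched in a few lines: instead of computing one acquisition value under point-estimated hyperparameters, FBO averages the acquisition over posterior draws of the hyperparameters. A minimal illustration, assuming each draw yields a Gaussian predictive distribution at the candidate point:

```python
import math
import numpy as np

def expected_improvement(mu, sigma, best_f):
    # Analytic EI for one Gaussian predictive distribution N(mu, sigma^2)
    z = (mu - best_f) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * cdf + pdf)

def fully_bayesian_ei(predictions, best_f):
    """Average EI over hyperparameter posterior draws.

    predictions: list of (mu, sigma) pairs, one per hyperparameter sample;
    under ML-II this list would contain a single pair from the point estimate.
    """
    return np.mean([expected_improvement(mu, s, best_f) for mu, s in predictions])
```

The extra cost over ML-II is one acquisition evaluation per hyperparameter draw, which is the "almost trivial" price of robustness the abstract refers to.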


BoTorch is a Framework for Bayesian Optimization in PyTorch

#artificialintelligence

Originally published on Towards AI. The open-source framework supports the implementation of low-level Bayesian optimization algorithms.