Fast Bayesian Optimization of Function Networks with Partial Evaluations
Poompol Buathong, Peter I. Frazier
Bayesian optimization of function networks (BOFN) is a framework for optimizing expensive-to-evaluate objective functions structured as networks, where some nodes' outputs serve as inputs for others. Many real-world applications, such as manufacturing and drug discovery, involve function networks with additional properties: nodes that can be evaluated independently and that incur varying costs. A recent BOFN variant, p-KGFN, leverages this structure and enables cost-aware partial evaluations, selectively querying only a subset of nodes at each iteration. p-KGFN reduces the number of expensive objective function evaluations needed but has a large computational overhead: choosing where to evaluate requires optimizing a nested Monte Carlo-based acquisition function for each node in the network. To address this, we propose an accelerated p-KGFN algorithm that reduces computational overhead with only a modest loss in query efficiency. Key to our approach is the generation of node-specific candidate inputs for each node in the network via one inexpensive global Monte Carlo simulation. Numerical experiments show that our method maintains competitive query efficiency while achieving up to a 16x speedup over the original p-KGFN algorithm.
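To make the function-network setting concrete, here is a minimal sketch of a two-node network with independently evaluable nodes of differing costs, plus the abstract's key idea of reusing one cheap global Monte Carlo simulation to generate node-specific candidate inputs. All node functions, costs, and names below are illustrative assumptions, not taken from the paper, and the GP surrogates and acquisition function are omitted entirely.

```python
import numpy as np

# Hypothetical two-node function network: node_1's output feeds node_2.
# In the BOFN setting each node is expensive and can be queried separately.
def node_1(x):  # e.g. an upstream process step (assumed form)
    return np.sin(3.0 * x)

def node_2(y):  # downstream step consuming node_1's output (assumed form)
    return -(y - 0.5) ** 2

COSTS = {"node_1": 1.0, "node_2": 5.0}  # assumed per-node evaluation costs

def full_network(x):
    # Evaluating the full objective incurs the cost of every node.
    return node_2(node_1(x))

# One inexpensive global Monte Carlo simulation over the input space,
# reused to generate candidate inputs for every node: node_1's candidates
# are raw inputs, node_2's candidates are simulated node_1 outputs.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=256)
candidates = {"node_1": xs, "node_2": node_1(xs)}
```

In the accelerated algorithm, these shared candidates replace the per-node nested acquisition optimization; here they are just arrays, to show where the single simulation plugs in.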
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Thailand (0.04)
- Asia > Russia > Siberian Federal District > Novosibirsk Oblast > Novosibirsk (0.04)
Multi-Information Source Optimization
Matthias Poloczek, Jialei Wang, Peter Frazier
We consider Bayesian methods for multi-information source optimization (MISO), in which we seek to optimize an expensive-to-evaluate black-box objective function while also accessing cheaper but biased and noisy approximations ("information sources"). We present a novel algorithm that outperforms the state of the art for this problem by using a Gaussian process covariance kernel better suited to MISO than those used by previous approaches, and an acquisition function based on a one-step optimality analysis supported by efficient parallelization. We also provide a novel technique to guarantee the asymptotic quality of the solution provided by this algorithm. Experimental evaluations demonstrate that this algorithm consistently finds designs of higher value at less cost than previous approaches.
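The MISO setup can be sketched with a toy pair of sources: an expensive truth and a cheap, biased approximation, queried under a cost-aware rule. The functions, costs, and the naive improvement-per-cost rule below are illustrative stand-ins for the paper's Gaussian process model and one-step (knowledge-gradient style) acquisition, not the actual algorithm.

```python
import numpy as np

# Hypothetical MISO instance: one expensive truth, one cheap biased source.
def truth(x):            # expensive black-box objective (assumed form)
    return -(x - 0.3) ** 2

def cheap_source(x):     # biased, deterministic approximation (assumed form)
    return truth(x) + 0.05 * np.sin(10.0 * x)

COSTS = [10.0, 1.0]      # assumed query costs: truth vs. cheap source
SOURCES = [truth, cheap_source]

def pick_source(improvement_estimates):
    # Naive cost-aware rule: choose the source with the best estimated
    # improvement per unit cost (a proxy for a one-step value analysis).
    scores = [imp / c for imp, c in zip(improvement_estimates, COSTS)]
    return int(np.argmax(scores))
```

The point of the sketch is the trade-off the algorithm automates: a large improvement from the truth can still lose to a modest improvement from a source that is ten times cheaper.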
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > United States > Arizona > Pima County > Tucson (0.04)
- (2 more...)
Long-run Behaviour of Multi-fidelity Bayesian Optimisation
Gbetondji J-S Dovonon, Jakob Zeitler
Multi-fidelity Bayesian Optimisation (MFBO) has been shown to generally converge faster than single-fidelity Bayesian Optimisation (SFBO) (Poloczek et al. (2017)). Inspired by recent benchmark papers, we investigate the long-run behaviour of MFBO, based on observations in the literature that it might under-perform in certain scenarios (Mikkola et al. (2023), Eggensperger et al. (2021)). Under-performance of MFBO in the long run could significantly undermine its application to many research tasks, especially when we are not able to identify when the under-performance begins. We create a simple benchmark study, showcase empirical results, and discuss scenarios and possible reasons for under-performance.
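The budget accounting behind such a benchmark can be illustrated with a toy comparison: best-so-far curves for an expensive high-fidelity objective versus a cheap, biased low-fidelity one, plotted against a shared cost axis. The objective, the bias, the costs, and the random-search stand-ins below are all assumptions for illustration; no actual BO machinery is involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):        # high-fidelity objective (assumed form)
    return -(x - 0.7) ** 2

def low_fidelity(x):     # cheap approximation with an assumed bias term
    return objective(x) + 0.1 * (x - 0.5)

COST_HI, COST_LO = 5.0, 1.0   # assumed per-query costs
BUDGET = 50.0

def best_so_far(values):
    # Running maximum: the standard curve compared across methods.
    return np.maximum.accumulate(values)

# Random-search stand-ins: the same budget buys 10 expensive queries
# or 50 cheap ones. A curve built on biased cheap queries can plateau
# away from the true optimum, which is the long-run failure mode at issue.
sf_curve = best_so_far(objective(rng.uniform(0, 1, int(BUDGET // COST_HI))))
mf_proxy = best_so_far(low_fidelity(rng.uniform(0, 1, int(BUDGET // COST_LO))))
```

In a real study, both curves would come from actual SFBO/MFBO runs, and the question is whether the multi-fidelity curve keeps its early lead as the cost axis grows.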