Generalizing Bayesian Optimization with Decision-theoretic Entropies
Willie Neiswanger
–Neural Information Processing Systems
Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries. Existing information-theoretic BO procedures aim to make queries that most reduce the uncertainty about optima, where the uncertainty is captured by Shannon entropy. However, an optimal measure of uncertainty would ideally factor in how we intend to use the inferred quantity in some downstream procedure. In this paper, we instead consider a generalization of Shannon entropy from work in statistical decision theory [13, 39], which contains a broad class of uncertainty measures parameterized by a problem-specific loss function corresponding to a downstream task. We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures such as knowledge gradient, expected improvement, and entropy search. We then show how alternative choices for the loss yield a flexible family of acquisition functions that can be customized for use in novel optimization settings.
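To make the abstract's central object concrete, the following is a brief sketch of the decision-theoretic entropy it refers to, following the standard formulation from statistical decision theory [13, 39]; the action set $\mathcal{A}$, loss $\ell$, and distribution $p$ are generic notation assumed here for illustration, not symbols taken from the paper itself.

```latex
% Decision-theoretic entropy of a distribution p(x), given an action set A
% and a task-specific loss ell(x, a): the best achievable expected loss
% under p, taken over all available actions.
H_{\ell}\bigl[p(x)\bigr]
  \;=\; \inf_{a \in \mathcal{A}} \; \mathbb{E}_{p(x)}\!\bigl[\,\ell(x, a)\,\bigr]

% Special case: if the actions are themselves distributions q over x and
% the loss is the log loss ell(x, q) = -log q(x), then by Gibbs' inequality
% the infimum is attained at q = p, recovering the Shannon entropy:
H_{\ell}\bigl[p(x)\bigr]
  \;=\; \inf_{q} \; \mathbb{E}_{p(x)}\!\bigl[-\log q(x)\bigr]
  \;=\; \mathbb{E}_{p(x)}\!\bigl[-\log p(x)\bigr]
  \;=\; H\bigl[p(x)\bigr].
```

Swapping the log loss for a loss tied to a downstream optimization task is what yields the broader family of acquisition functions the abstract describes.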