Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
Raghu Bollapragada, Stefan M. Wild
arXiv.org Artificial Intelligence
Several methods have been proposed to solve such derivative-free stochastic optimization problems; we refer the reader to [3, 38] for surveys of these methods. A popular class of these methods estimates the gradients using function values and then applies standard gradient-based optimization methods with these estimators. Quasi-Newton methods are recognized as among the most powerful methods for solving deterministic optimization problems; they build quadratic models of the objective function using only gradient information. Recently, researchers have adapted these methods to stochastic settings in which (noisy) gradient information is available. The empirical results in [15] indicate that a careful implementation of these methods can be competitive with popular stochastic gradient methods. We adapt these methods to situations where the gradients must be estimated from function values, and we propose finite-difference derivative-free stochastic quasi-Newton methods for solving (1) that exploit common random number (CRN) evaluations of f.
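The core idea combined here can be illustrated with a minimal sketch: estimating a gradient by central finite differences of a noisy objective, using common random numbers so that the same noise realization enters both the forward and backward evaluations and cancels in the difference. This is not the paper's algorithm; the function names, the seed-based noise model, and all parameters below are illustrative assumptions.

```python
import numpy as np

def fd_gradient_crn(f, x, h=1e-5, num_samples=8, rng=None):
    """Central finite-difference gradient estimate of a stochastic
    objective f(x, seed), averaged over num_samples noise realizations.

    Common random numbers (CRN): within each difference, the SAME seed
    is passed to both the f(x + h e_i) and f(x - h e_i) evaluations,
    so noise that depends only on the seed cancels exactly.
    (Illustrative sketch; names and defaults are assumptions.)
    """
    rng = rng or np.random.default_rng(0)
    n = len(x)
    g = np.zeros(n)
    for _ in range(num_samples):
        seed = int(rng.integers(1 << 31))  # one realization per sample
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            # CRN: identical seed for the +h and -h evaluations
            g[i] += (f(x + e, seed) - f(x - e, seed)) / (2.0 * h)
    return g / num_samples

# Toy noisy quadratic: f(x) = ||x||^2 / 2 + noise(seed); true gradient is x.
def noisy_quadratic(x, seed):
    noise = np.random.default_rng(seed).normal(scale=0.1)
    return 0.5 * float(x @ x) + noise

x0 = np.array([1.0, -2.0])
grad = fd_gradient_crn(noisy_quadratic, x0)
```

For this toy objective the additive noise depends only on the seed, so CRN removes it entirely and the central difference recovers the gradient of the quadratic exactly; without CRN (independent seeds per evaluation), the noise would be amplified by the 1/(2h) factor. Such CRN-based estimators are what make finite-difference quasi-Newton updates usable in the stochastic setting.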
Sep-24-2021