A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

Curtis, Frank E., Kungurtsev, Vyacheslav, Robinson, Daniel P., Wang, Qi

arXiv.org Artificial Intelligence 

The interior-point methodology is one of the most effective approaches for solving continuous constrained optimization problems. In the context of (deterministic) derivative-based algorithmic strategies, interior-point methods offer convergence guarantees from remote starting points [11, 21, 27], and in both convex and nonconvex settings such algorithms can offer good worst-case iteration complexity properties [7, 21]. Furthermore, many of the most popular software packages for solving large-scale continuous optimization problems are based on interior-point methods [1, 11, 24, 25, 26, 27], and these have been used to great effect for many years. Despite this extensive literature on the theoretical and practical benefits of interior-point methods in the context of (deterministic) derivative-based algorithms for solving (non)convex optimization problems, to the best of our knowledge no interior-point method has yet been shown rigorously to offer convergence guarantees when neither function nor derivative evaluations are available and only stochastic gradient estimates are employed.
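The abstract describes, at a high level, interior-point methods driven only by stochastic gradient estimates. As a purely illustrative sketch of that class of methods, and not the algorithm proposed in the paper, the Python snippet below runs stochastic gradient descent on a log-barrier reformulation of a bound-constrained problem min f(x) subject to l <= x <= u, using a shrinking barrier parameter and a fraction-to-the-boundary rule to keep iterates strictly feasible. The function name sgd_interior_point, the step-size schedule, and the barrier schedule are all assumptions made for illustration.

import numpy as np

def sgd_interior_point(grad_est, l, u, x0, n_iters=5000,
                       alpha0=0.1, mu0=1.0, theta=0.99, seed=0):
    # Illustrative sketch only (hypothetical schedules, not the paper's method).
    # grad_est(x, rng) returns an unbiased stochastic estimate of grad f(x).
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(n_iters):
        mu = mu0 / np.sqrt(k + 1)        # assumed shrinking barrier parameter
        alpha = alpha0 / np.sqrt(k + 1)  # diminishing step size, as in plain SGD
        # Stochastic gradient of the barrier-augmented objective
        # f(x) - mu * sum(log(x - l) + log(u - x)).
        g = grad_est(x, rng) - mu / (x - l) + mu / (u - x)
        d = -alpha * g
        # Fraction-to-the-boundary rule: scale the step so that the next
        # iterate stays strictly inside [l, u].
        with np.errstate(divide="ignore", invalid="ignore"):
            t_lo = np.where(d < 0, theta * (x - l) / -d, np.inf)
            t_hi = np.where(d > 0, theta * (u - x) / d, np.inf)
        t = min(1.0, t_lo.min(), t_hi.min())
        x = x + t * d
    return x

# Toy usage: minimize ||x - c||^2 with noisy gradients, where c lies outside
# the box [0, 1]^2, so the solution sits on the boundary at (1, 0).
c = np.array([1.5, -0.5])
grad = lambda x, rng: 2.0 * (x - c) + rng.normal(scale=0.1, size=x.shape)
x_star = sgd_interior_point(grad, l=np.zeros(2), u=np.ones(2),
                            x0=np.full(2, 0.5), n_iters=20000)
print(x_star)  # approaches (1, 0) while remaining strictly interior

The fraction-to-the-boundary safeguard is a standard ingredient of interior-point methods; the choice to pair it with plain SGD here is only one simple way to realize a stochastic-gradient interior-point iteration under the stated assumptions.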
