Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
Hongzhou Lin, Julien Mairal, Zaid Harchaoui
We introduce a generic scheme for accelerating gradient-based optimization methods in the sense of Nesterov. The approach, called Catalyst, builds upon the inexact accelerated proximal point algorithm for minimizing a convex objective function, and consists of approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. One of the keys to achieving acceleration in theory and in practice is to solve these sub-problems with appropriate accuracy, using the right stopping criterion and the right warm-start strategy. In this paper, we give practical guidelines for using Catalyst and present a comprehensive theoretical analysis of its global complexity. We show that Catalyst applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. We conclude with extensive experiments showing that acceleration is useful in practice, especially for ill-conditioned problems.
Dec-15-2017
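The abstract describes the Catalyst outer loop: repeatedly smooth the objective around a prox-center, approximately minimize the resulting sub-problem with the inner method (warm-started at the previous iterate), and apply a Nesterov-style extrapolation. Below is a minimal Python sketch of that scheme under stated assumptions: the function name `catalyst`, the fixed inner-iteration budget `n_inner`, and the plain gradient-descent inner solver are illustrative choices, not the paper's exact interface; the actual method pairs each supported algorithm with specific stopping criteria and warm-start rules.

```python
import numpy as np

def catalyst(grad_f, x0, kappa, mu=0.0, n_outer=100, n_inner=50, lr=1e-2):
    """Sketch of the Catalyst outer loop (illustrative, simplified).

    Each outer iteration approximately minimizes the auxiliary problem
        h_k(x) = f(x) + (kappa/2) * ||x - y_{k-1}||^2
    with a fixed budget of inner gradient steps, then extrapolates to
    produce the next prox-center y_k.
    """
    x_prev = x0.copy()
    y = x0.copy()
    # q = mu/(mu+kappa) drives the extrapolation sequence (q = 0 when
    # f is not strongly convex).
    q = mu / (mu + kappa)
    alpha = np.sqrt(q) if q > 0 else 1.0
    for _ in range(n_outer):
        # Inner loop: warm-start at the previous outer iterate and run a
        # fixed budget of gradient steps on the smoothed sub-problem h_k.
        x = x_prev.copy()
        for _ in range(n_inner):
            x -= lr * (grad_f(x) + kappa * (x - y))
        # alpha_{k+1} is the positive root of
        #   alpha^2 = (1 - alpha) * alpha_k^2 + q * alpha.
        a2 = alpha ** 2
        alpha_next = 0.5 * (q - a2 + np.sqrt((a2 - q) ** 2 + 4.0 * a2))
        beta = alpha * (1.0 - alpha) / (a2 + alpha_next)
        # Extrapolation: the next prox-center.
        y = x + beta * (x - x_prev)
        x_prev, alpha = x, alpha_next
    return x_prev

# Toy least-squares usage: the inner step size is set from the Lipschitz
# constant of the smoothed sub-problem, L + kappa.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2
x = catalyst(lambda x: A.T @ (A @ x - b), np.zeros(20),
             kappa=1.0, n_outer=200, lr=1.0 / (L + 1.0))
```

The warm start at the previous outer iterate and the choice of the smoothing parameter kappa are the practical levers the abstract highlights; in the paper they are tuned per inner algorithm and paired with adaptive stopping criteria rather than the fixed inner budget used here.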