Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization
Neural Information Processing Systems
We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes to finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/µ) ln(1/ε)) for L-smooth and µ-strongly convex individual functions; one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, and SAG), it is not possible to get an 'accelerated' complexity bound of Õ((n + √(nL/µ)) ln(1/ε)), unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the iteration complexity is bounded from below by Ω(n + L/ε), assuming that (on average) the same update rule is used in every iteration, and by Ω(n + √(nL/ε)) otherwise.
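To make the finite sum setting concrete, below is a minimal sketch of SVRG (one of the variance-reduction schemes named in the abstract) in Python/NumPy. The toy least-squares instance, the function name `svrg`, and all parameter values are illustrative assumptions, not taken from the paper. Note how the update rule explicitly consumes the index `i` of the individual function queried by the oracle; this is exactly the structural knowledge the first result shows to be necessary.

```python
import numpy as np

def svrg(grads, x0, n, step, n_epochs, m):
    """Illustrative SVRG sketch for min_x (1/n) * sum_i f_i(x).

    grads(i, x) returns the gradient of the i-th individual function
    f_i at x. The index i is an explicit input: the variance-reduced
    update below must know *which* f_i the oracle evaluated.
    """
    x = x0.copy()
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot (one pass over all n terms).
        full_grad = sum(grads(i, x_snap) for i in range(n)) / n
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient: unbiased for the
            # full gradient, with variance vanishing as x -> x_snap.
            g = grads(i, x) - grads(i, x_snap) + full_grad
            x -= step * g
    return x

# Hypothetical toy usage: least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2,
# which is L-smooth and (for n > d, generic data) strongly convex.
rng = np.random.default_rng(0)
n, d = 100, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grads = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grads, np.zeros(d), n, step=0.01, n_epochs=30, m=2 * n)
print(np.linalg.norm(A @ x_hat - b))  # residual of the least-squares fit
```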