Shadowing Properties of Optimization Algorithms

Antonio Orvieto, Aurelien Lucchi

Neural Information Processing Systems 

Analyzing the convergence properties of these algorithms can be complex, especially for NAG, whose convergence proof relies on algebraic tricks that reveal little about the acceleration phenomenon, i.e., the celebrated optimality of NAG for smooth convex optimization. An alternative approach is instead to view these methods as numerical integrators of certain ordinary differential equations (ODEs).
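As a concrete illustration of the methods under discussion (a minimal sketch, not code from the paper), the snippet below runs NAG on a smooth convex quadratic. In the ODE view referenced above, these iterates can be seen as a discretization of a second-order differential equation with vanishing damping; the problem instance and all variable names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): Nesterov's accelerated
# gradient (NAG) on a smooth convex quadratic f(x) = 0.5 * x^T A x.
# In the ODE view, these iterates discretize a second-order ODE with
# vanishing damping, x'' + (3/t) x' + grad f(x) = 0.

A = np.diag([1.0, 10.0])             # L-smooth quadratic, L = 10
grad = lambda x: A @ x
L = 10.0
eta = 1.0 / L                        # step size 1/L

x = np.array([1.0, 1.0])             # current iterate x_k
x_prev = x.copy()                    # previous iterate x_{k-1}

for k in range(1, 201):
    momentum = (k - 1) / (k + 2)       # Nesterov's vanishing momentum
    y = x + momentum * (x - x_prev)    # extrapolation (look-ahead) point
    x_prev, x = x, y - eta * grad(y)   # gradient step at the point y

print(np.linalg.norm(x))  # small: iterates approach the minimizer 0
```

The momentum coefficient (k - 1)/(k + 2) is the standard choice yielding the O(1/k^2) rate in function value for smooth convex objectives; the plain gradient method on the same problem achieves only O(1/k).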
