
Collaborating Authors

Ali Jadbabaie


Escaping Saddle Points in Constrained Optimization

Neural Information Processing Systems

In this paper, we study the problem of escaping from saddle points in smooth nonconvex optimization problems subject to a convex set C. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set C is simple for a quadratic objective function. Specifically, our results hold if one can find a ρ-approximate solution of a quadratic program subject to C in polynomial time, where ρ < 1 is a positive constant that depends on the structure of the set C. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an (ε, γ)-second-order stationary point (SOSP) in at most O(max{ε^{-2}, ρ^{-3}γ^{-3}}) iterations.
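For readers unfamiliar with the notation, the display below sketches the unconstrained analogue of an (ε, γ)-second-order stationary point; the paper's actual definition measures both conditions over feasible directions of the convex set C, so this is only an illustrative simplification, not the paper's exact criterion.

% Illustrative (unconstrained) analogue of an (\epsilon,\gamma)-SOSP;
% the constrained definition in the paper restricts both conditions
% to feasible directions within the set C.
\[
  \|\nabla f(x)\| \le \epsilon
  \qquad \text{and} \qquad
  \lambda_{\min}\!\left(\nabla^2 f(x)\right) \ge -\gamma .
\]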


Direct Runge-Kutta Discretization Achieves Acceleration

Neural Information Processing Systems

We study gradient-based optimization methods obtained by directly discretizing a second-order ordinary differential equation (ODE) related to the continuous limit of Nesterov's accelerated gradient method. When the function is smooth enough, we show that acceleration can be achieved by a stable discretization of this ODE using standard Runge-Kutta integrators.
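A minimal sketch of the idea, assuming the standard continuous limit of Nesterov's method, x''(t) + (3/t) x'(t) + ∇f(x(t)) = 0, and an off-the-shelf Runge-Kutta integrator (SciPy's RK45). The quadratic test objective, time horizon, and tolerances are illustrative choices and not the paper's exact ODE family or parameter settings.

# Sketch: direct Runge-Kutta discretization of a Nesterov-type ODE.
# Assumed ODE form: x'' + (3/t) x' + grad f(x) = 0 (continuous limit of
# Nesterov's accelerated gradient); objective and settings are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])          # simple smooth, convex test objective
def f(x): return 0.5 * x @ A @ x
def grad_f(x): return A @ x

def ode(t, z):
    # State z = (x, v) with v = dx/dt; the second-order ODE becomes
    # x' = v,  v' = -(3/t) v - grad f(x).
    d = len(z) // 2
    x, v = z[:d], z[d:]
    return np.concatenate([v, -(3.0 / t) * v - grad_f(x)])

x0 = np.array([1.0, 1.0])
z0 = np.concatenate([x0, np.zeros_like(x0)])
# Start slightly after t = 0 to avoid the 3/t singularity.
sol = solve_ivp(ode, (1e-3, 20.0), z0, method="RK45", rtol=1e-8, atol=1e-10)
x_final = sol.y[:2, -1]
print("f(x(T)) =", f(x_final))    # objective value at the final integration time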


