Reviews: Acceleration through Optimistic No-Regret Dynamics

Neural Information Processing Systems 

This paper shows that Nesterov's accelerated gradient descent algorithms can be interpreted as computing a saddle point via online optimization algorithms. A convex optimization problem is transformed into a min-max problem via Fenchel duality, and the solution of that problem is then approximated by no-regret online optimization dynamics. This paper could be a significant contribution to the optimization community; I would say this is one of the most natural interpretations of Nesterov's accelerated gradient methods. The use of weighted regrets and (optimistic) Follow-The-Leader (instead of Follow-The-Regularized-Leader) is somewhat artificial but acceptable.
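To make the reduction concrete, here is a minimal sketch of the general idea (not the paper's exact weighted-regret or optimistic-FTL scheme): take f(x) = (1/2)(x - 3)^2, whose Fenchel conjugate is f*(y) = (1/2)y^2 + 3y, rewrite min_x f(x) as the game min_x max_y [xy - f*(y)], and let both players run plain online gradient descent/ascent. The function name and step size are illustrative choices, not from the paper.

```python
# Sketch: solving min_x f(x), f(x) = 0.5*(x - 3)**2, via its Fenchel game
#   min_x max_y [ x*y - f*(y) ],  with  f*(y) = 0.5*y**2 + 3*y.
# Both players use plain (unweighted, non-optimistic) gradient updates,
# unlike the weighted/optimistic dynamics analyzed in the paper.

def fenchel_game_gda(eta=0.1, steps=500):
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx = y                # d/dx of the payoff x*y - f*(y)
        gy = x - y - 3.0      # d/dy of the payoff x*y - f*(y)
        x -= eta * gx         # x-player: online gradient descent
        y += eta * gy         # y-player: online gradient ascent
    return x, y

x, y = fenchel_game_gda()
print(x, y)  # x approaches the minimizer x* = 3; y approaches f'(x*) = 0
```

Because f* here is strongly convex, even the last iterates converge to the saddle point; the paper's contribution is showing that carefully weighted, optimistic variants of such dynamics recover Nesterov's accelerated rate.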