Efficient Algorithms for Smooth Minimax Optimization

Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

Neural Information Processing Systems

In terms of g(·, y), we consider two settings, strongly convex and nonconvex, and improve upon the best known rates in both. For strongly-convex g(·, y), for every y, we propose a new direct optimal algorithm combining Mirror-Prox and Nesterov's AGD, and show that it can find the global optimum in Õ(1/k²) iterations.
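As a rough illustration of the Mirror-Prox idea mentioned above, the sketch below runs the extragradient method (the Euclidean instance of Mirror-Prox) on a toy strongly-convex-strongly-concave saddle problem. This is not the paper's combined Mirror-Prox + AGD algorithm; the objective f(x, y) = 0.5x² + 2xy − 0.5y² and the step size eta are hand-picked assumptions for demonstration.

```python
# Toy saddle problem f(x, y) = 0.5*x**2 + 2*x*y - 0.5*y**2,
# whose unique saddle point is (0, 0).

def grad_x(x, y):
    return x + 2 * y      # df/dx

def grad_y(x, y):
    return 2 * x - y      # df/dy

def extragradient(x, y, eta=0.2, iters=200):
    """Euclidean Mirror-Prox (extragradient): descend in x, ascend in y."""
    for _ in range(iters):
        # Half step: probe the gradient field at the current point.
        xm = x - eta * grad_x(x, y)
        ym = y + eta * grad_y(x, y)
        # Full step: update the current point using the midpoint gradient.
        x = x - eta * grad_x(xm, ym)
        y = y + eta * grad_y(xm, ym)
    return x, y

x, y = extragradient(1.0, 1.0)
print(x, y)  # both converge to the saddle point at the origin
```

The half-step/full-step structure is what distinguishes extragradient from plain gradient descent-ascent, which can cycle on problems with a strong bilinear coupling term like 2xy.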




Improved Algorithms for Convex-Concave Minimax Optimization

Neural Information Processing Systems

This paper studies minimax optimization problems min_x max_y f(x, y), where f(x, y) is μ_x-strongly convex with respect to x, μ_y-strongly concave with respect to y, and (L_x, L_xy, L_y)-smooth. Zhang et al. [42] provided the following lower bound on the gradient complexity of any first-order method: Ω(√(L_x/μ_x + L_xy²/(μ_x μ_y) + L_y/μ_y) · log(1/ε)).
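To make the lower bound concrete, the snippet below evaluates the effective condition number inside the square root for some hypothetical constants (the numbers are illustrative assumptions, not values from the paper).

```python
import math

# Hypothetical problem constants, chosen for illustration only.
L_x, L_xy, L_y = 4.0, 2.0, 4.0   # smoothness constants
mu_x, mu_y = 1.0, 0.5            # strong convexity / concavity moduli

# Effective condition number appearing under the square root in the bound.
kappa = L_x / mu_x + L_xy**2 / (mu_x * mu_y) + L_y / mu_y

# The lower bound scales as sqrt(kappa) * log(1/eps) gradient queries.
eps = 1e-6
lower_bound = math.sqrt(kappa) * math.log(1 / eps)
print(kappa)        # 20.0 for these constants
print(lower_bound)
```

Note how the coupling term L_xy²/(μ_x μ_y) can dominate the bound when the bilinear interaction between x and y is strong relative to the strong convexity/concavity moduli.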