Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems

Neural Information Processing Systems 

We study stochastic nonconvex-strongly-concave minimax problems of the form $\min_{\mathbf{x}} \max_{\mathbf{y}} f(\mathbf{x}, \mathbf{y})$, where $f$ is possibly nonconvex in $\mathbf{x}$ and strongly concave in $\mathbf{y}$. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of $f$ at each iteration. This formulation covers many machine learning applications as special cases, such as robust optimization and adversarial training. The most popular algorithm for this problem is stochastic gradient descent ascent (SGDA), which requires $\mathcal{O}(\kappa^3 \varepsilon^{-4})$ stochastic gradient evaluations, where $\kappa$ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of $\mathcal{O}(\kappa^3 \varepsilon^{-3})$, and its dependency on $\varepsilon$ is optimal for this problem.
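As a rough illustration (not the paper's exact algorithm), the sketch below contrasts plain SGDA with a descent-ascent loop driven by a SARAH/SPIDER-style recursive gradient estimator, the kind of variance reduction SREDA builds on. The toy objective, step sizes, batch sizes, and refresh period `q` are all illustrative assumptions; SREDA additionally runs an inner concave-maximization loop on $\mathbf{y}$ with its own parameter schedule, which is omitted here.

```python
# Minimal sketch: SGDA vs. descent ascent with a SARAH/SPIDER-style
# recursive estimator. All parameters below are illustrative assumptions,
# not the paper's actual choices.
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Toy stochastic objective f(x, y; xi) = (xi.x)^2 / 2 + x.y - ||y||^2 / 2.
# It is strongly concave in y; the quadratic x-part is only a simple
# stand-in for a genuinely nonconvex loss.
def grad_x(x, y, xis):
    return np.mean([xi * (xi @ x) for xi in xis], axis=0) + y

def grad_y(x, y):
    return x - y   # kept deterministic here for brevity

def sgda(x, y, steps=500, eta_x=0.02, eta_y=0.2, batch=4):
    """Plain stochastic gradient descent ascent: fresh minibatch each step."""
    for _ in range(steps):
        xis = rng.standard_normal((batch, dim))
        gx, gy = grad_x(x, y, xis), grad_y(x, y)
        x, y = x - eta_x * gx, y + eta_y * gy   # simultaneous descent/ascent
    return x, y

def recursive_gda(x, y, steps=500, eta_x=0.02, eta_y=0.2, q=20, B=64):
    """Descent ascent driven by the recursive estimator
    v_t = g(x_t, y_t; xi_t) - g(x_{t-1}, y_{t-1}; xi_t) + v_{t-1},
    refreshed with a large batch every q steps (variance reduction)."""
    v = grad_x(x, y, rng.standard_normal((B, dim)))
    for t in range(steps):
        x_prev, y_prev = x, y
        x = x - eta_x * v                       # descent step uses estimator
        y = y + eta_y * grad_y(x, y)            # ascent step on y
        if (t + 1) % q == 0:                    # periodic large-batch refresh
            v = grad_x(x, y, rng.standard_normal((B, dim)))
        else:                                   # cheap one-sample correction
            xi = rng.standard_normal((1, dim))
            v = grad_x(x, y, xi) - grad_x(x_prev, y_prev, xi) + v
    return x, y

x0, y0 = rng.standard_normal(dim), np.zeros(dim)
print("SGDA   |x|:", np.linalg.norm(sgda(x0.copy(), y0.copy())[0]))
print("VR-GDA |x|:", np.linalg.norm(recursive_gda(x0.copy(), y0.copy())[0]))
```

The recursive correction reuses the same sample `xi` at consecutive iterates, so successive errors partly cancel; this is what lets the estimator get by with single-sample updates between periodic large-batch refreshes and underlies the improved $\varepsilon^{-3}$ dependence.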