Reviews: Competitive Gradient Descent

Neural Information Processing Systems 

This paper deals with the computation of Nash Equilibria in competitive two-player games where the x player minimizes a function f(x,y) and the y player minimizes a function g(x,y). Such problems arise in a wide variety of domains, notably in training GANs, and there has been much recent interest in developing algorithms for solving them. Gradient Descent Ascent (GDA) is a natural candidate algorithm for finding a Nash Equilibrium, but it provably oscillates or diverges even in simple settings. As such, many recent works have modified GDA or proposed different algorithms or schemes to guarantee convergence. This paper proposes a new algorithm called Competitive Gradient Descent (CGD), which updates each player's iterates by adding the Nash Equilibrium of a regularized bilinear approximation of the game at the current iterates. CGD requires Hessian-vector products, making it a second-order algorithm.
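To make the update concrete, here is a minimal sketch of the CGD step and its behavior on the bilinear game f(x,y) = x·y, g = -f, where plain GDA oscillates and diverges. This is an illustrative reimplementation, not the authors' code: the learning rate `eta`, the function names, and the use of explicit mixed-Hessian blocks with a dense solve (rather than the Hessian-vector-product iterative solver the paper advocates) are all choices made here for clarity on small problems.

```python
import numpy as np

def cgd_step(x, y, grad_x_f, grad_y_g, D_xy_f, D_yx_g, eta):
    """One Competitive Gradient Descent update.

    Each player's increment is the Nash Equilibrium of a regularized
    bilinear approximation of the game at the current iterates:
        dx = -(I - eta^2 Dxy_f Dyx_g)^{-1} (grad_x f - eta Dxy_f grad_y g)
        dy = -(I - eta^2 Dyx_g Dxy_f)^{-1} (grad_y g - eta Dyx_g grad_x f)
    Hessian blocks are explicit matrices here; at scale one would use
    Hessian-vector products and an iterative linear solver instead.
    """
    n, m = x.size, y.size
    gx, gy = grad_x_f(x, y), grad_y_g(x, y)
    Dxy, Dyx = D_xy_f(x, y), D_yx_g(x, y)  # shapes (n, m) and (m, n)
    dx = -np.linalg.solve(np.eye(n) - eta**2 * Dxy @ Dyx, gx - eta * Dxy @ gy)
    dy = -np.linalg.solve(np.eye(m) - eta**2 * Dyx @ Dxy, gy - eta * Dyx @ gx)
    return x + eta * dx, y + eta * dy

# Bilinear zero-sum game f(x, y) = x * y, g = -f. The unique Nash
# Equilibrium is the origin; CGD contracts toward it while GDA spirals out.
x, y = np.array([1.0]), np.array([1.0])
for _ in range(200):
    x, y = cgd_step(
        x, y,
        grad_x_f=lambda x, y: y,
        grad_y_g=lambda x, y: -x,
        D_xy_f=lambda x, y: np.eye(1),
        D_yx_g=lambda x, y: -np.eye(1),
        eta=0.2,
    )
print(np.hypot(x[0], y[0]))  # distance to equilibrium shrinks toward 0
```

On this game each CGD step contracts the distance to the equilibrium by the factor 1/sqrt(1 + eta^2), whereas each GDA step expands it by sqrt(1 + eta^2), which is the oscillation/divergence the review refers to.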