Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications

Neural Information Processing Systems 

Many zeroth-order (ZO) optimization algorithms have been developed to solve nonconvex minimax problems in machine learning and computer vision. However, existing ZO minimax algorithms have high complexity and rely on restrictive conditions for their ZO gradient estimators. To address these issues, we design a new unified ZO gradient descent extragradient ascent (ZO-GDEGA) algorithm, which reduces the overall complexity to $\mathcal{O}(d\epsilon^{-6})$ to find an $\epsilon$-stationary point of the function $\psi$ for nonconvex-concave (NC-C) problems, where $d$ is the variable dimension. To the best of our knowledge, ZO-GDEGA is the first ZO algorithm with complexity guarantees to solve stochastic NC-C problems. Moreover, ZO-GDEGA requires weaker conditions on the ZO estimators and achieves more robust theoretical results.
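To make the setting concrete, the sketch below illustrates the generic pattern the abstract describes: a standard two-point ZO gradient estimator (finite differences along a random Gaussian direction) combined with plain gradient descent on $x$ and an extragradient ascent step on $y$. This is an illustrative toy, not the paper's exact ZO-GDEGA algorithm; the smoothing radius `mu`, step sizes, and test problem are all assumptions chosen for demonstration.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at x using a
    random Gaussian direction: (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u.
    A standard ZO estimator; not necessarily the paper's exact choice."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Toy minimax problem min_x max_y f(x, y) with a nonconvex term in x
# and a strongly concave coupling in y (illustrative only).
def f(x, y):
    return np.sin(x[0]) * x[0] + x @ y - 0.5 * y @ y

rng = np.random.default_rng(0)
x, y = np.ones(2), np.zeros(2)
eta_x, eta_y = 0.01, 0.05  # assumed step sizes for the demo

for _ in range(2000):
    # Extragradient ascent on y: probe at a half step, then update
    # from the original iterate using the gradient at the half step.
    gy = zo_grad(lambda yy: f(x, yy), y, rng=rng)
    y_half = y + eta_y * gy
    gy_half = zo_grad(lambda yy: f(x, yy), y_half, rng=rng)
    y = y + eta_y * gy_half
    # Plain gradient descent on x (the "GD" part of GDEGA).
    gx = zo_grad(lambda xx: f(xx, y), x, rng=rng)
    x = x - eta_x * gx
```

For this toy objective the inner maximizer satisfies $y^\ast = x$, so after enough iterations the ascent iterate $y$ should track $x$ closely, while $x$ drifts toward a stationary point of the envelope function.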