A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning

Neural Information Processing Systems 

By analyzing Local SGDA under the ideal condition of no gradient noise, we show that it generally cannot guarantee exact convergence with constant stepsizes and thus suffers from slow rates of convergence.
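
This failure mode can be seen even without stochastic noise: when each agent runs multiple local updates per communication round, heterogeneous local objectives pull each agent toward its own saddle point, so the averaged iterate settles at a biased fixed point rather than the saddle point of the global objective. The sketch below is not the paper's experiment; it is a minimal illustration with hypothetical coefficients, running deterministic Local GDA (Local SGDA with zero gradient noise) at a constant stepsize on two strongly-convex-strongly-concave quadratics.

    import numpy as np

    # Two agents with heterogeneous strongly-convex-strongly-concave objectives
    # f_i(x, y) = (a_i/2) x^2 + b_i x y - (c_i/2) y^2 + d_i x   (toy coefficients)
    params = [  # (a_i, b_i, c_i, d_i)
        (1.0, 0.5, 1.0, 1.0),
        (3.0, -0.5, 2.0, -2.0),
    ]

    def grads(p, x, y):
        a, b, c, d = p
        return a * x + b * y + d, b * x - c * y  # (df/dx, df/dy)

    # True saddle point of the average objective: solve the 2x2 linear system
    # a x + b y + d = 0,  b x - c y = 0
    a, b, c, d = np.mean(params, axis=0)
    A = np.array([[a, b], [b, -c]])
    x_star, y_star = np.linalg.solve(A, [-d, 0.0])

    eta, K, rounds = 0.05, 10, 500
    x, y = 0.0, 0.0
    for _ in range(rounds):
        updates = []
        for p in params:                 # each agent runs K local GDA steps
            xi, yi = x, y
            for _ in range(K):
                gx, gy = grads(p, xi, yi)
                xi, yi = xi - eta * gx, yi + eta * gy
            updates.append((xi, yi))
        x, y = np.mean(updates, axis=0)  # server averages the local iterates

    print(f"global saddle: ({x_star:.4f}, {y_star:.4f})")
    print(f"Local GDA:     ({x:.4f}, {y:.4f})  <- biased fixed point for K > 1")

With K = 1 the loop reduces to centralized GDA and recovers the exact saddle point; with K > 1 and heterogeneous objectives the iterates converge linearly, but to a point offset from the saddle, which is the exact-convergence gap described above.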
