Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes
Neural Information Processing Systems
Sharma et al. (2022) provide [...] Yang et al. (2022a) integrate Local SGDA with stochastic gradient estimators to eliminate the [...] More recently, Zhang et al. (2023) adopt compressed momentum methods with Local SGD to improve the communication efficiency of the algorithm. For centralized nonconvex minimax problems, Yang et al. (2022b) show that, even in deterministic settings, GDA-based methods require timescale separation of the stepsizes for the primal and dual updates.
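The need for timescale separation can be illustrated on a toy quadratic minimax problem (this is an illustrative sketch, not the paper's algorithm or the setting of Yang et al. (2022b)): for f(x, y) = L·x·y − y²/2, which is concave in y, simultaneous GDA with equal stepsizes can diverge, while shrinking the primal stepsize relative to the dual one restores convergence to the saddle point (0, 0).

```python
def gda(eta_x, eta_y, L=5.0, steps=200):
    """Simultaneous gradient descent-ascent on f(x, y) = L*x*y - 0.5*y**2,
    starting from (1, 1). Returns the final distance from the saddle (0, 0)."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = L * y       # df/dx: descent step on the primal variable
        gy = L * x - y   # df/dy: ascent step on the dual variable
        x, y = x - eta_x * gx, y + eta_y * gy
    return (x * x + y * y) ** 0.5

equal = gda(eta_x=0.2, eta_y=0.2)       # same timescale: iterates diverge
separated = gda(eta_x=0.02, eta_y=0.2)  # primal 10x slower: converges
print(f"equal stepsizes: {equal:.3e}, separated: {separated:.3e}")
```

With equal stepsizes the linear update map has spectral radius above one, so the iterates spiral outward; the 10x smaller primal stepsize brings the spectral radius below one. The constants (L = 5, the 0.2/0.02 stepsizes) are arbitrary choices for the demonstration.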