Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression
Michael Crawshaw, Blake Woodworth, Mingrui Liu
We analyze two variants of Local Gradient Descent applied to distributed logistic regression with heterogeneous, separable data, and show convergence at the rate $O(1/KR)$, where $K$ is the number of local steps and $R$ is the number of communication rounds, provided $R$ is sufficiently large. In contrast, all existing convergence guarantees for Local GD, on any problem, are at least $\Omega(1/R)$, so they fail to show any benefit from local updates. The key to our improved guarantee is showing progress on the logistic regression objective with a large stepsize $\eta \gg 1/K$, whereas prior analyses require $\eta \leq 1/K$.
arXiv.org Artificial Intelligence
Jan-23-2025
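As a rough illustration of the setting described in the abstract, the sketch below runs plain Local GD (the generic form, not necessarily either of the paper's two variants) on a toy two-client logistic regression problem with synthetic heterogeneous, separable data. The data generation, number of clients, and parameter choices ($K=32$, $R=20$, $\eta = 1 \gg 1/K$) are assumptions made purely for illustration.

```python
import numpy as np

def logistic_loss(w, X, y):
    # Average logistic loss (1/n) * sum_i log(1 + exp(-y_i <x_i, w>)), computed stably.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def logistic_grad(w, X, y):
    # Gradient of the average logistic loss: (1/n) * sum_i -y_i x_i / (1 + exp(y_i <x_i, w>)).
    margins = y * (X @ w)
    coeffs = -y * np.exp(-np.logaddexp(0.0, margins))  # -y_i * sigmoid(-margin_i)
    return (X.T @ coeffs) / len(y)

def local_gd(clients, dim, K, R, eta):
    # Plain Local GD: each of R communication rounds runs K full-gradient steps
    # on every client, then the server averages the resulting local iterates.
    # (The paper studies two specific variants; this is only the generic scheme.)
    w = np.zeros(dim)
    for _ in range(R):
        local_iterates = []
        for X, y in clients:
            w_local = w.copy()
            for _ in range(K):
                w_local -= eta * logistic_grad(w_local, X, y)
            local_iterates.append(w_local)
        w = np.mean(local_iterates, axis=0)
    return w

# Toy heterogeneous, linearly separable data: both clients share the separator
# w* = (1,...,1), but draw features from different regions (illustrative only).
rng = np.random.default_rng(0)
def make_client(center, n=50, dim=5):
    X = rng.normal(loc=center, scale=0.5, size=(n, dim))
    y = np.where(X @ np.ones(dim) > 0, 1.0, -1.0)
    return X, y

clients = [make_client(center=+1.0), make_client(center=-1.0)]
w = local_gd(clients, dim=5, K=32, R=20, eta=1.0)  # eta = 1 >> 1/K, the large-stepsize regime
print("per-client training loss:", [logistic_loss(w, X, y) for X, y in clients])
```

Per the paper's $O(1/KR)$ rate, increasing $K$ at fixed $R$ should speed up convergence on problems of this type; the script above only demonstrates the mechanics of local steps and server averaging, not the paper's analysis.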