Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity
Neural Information Processing Systems
The theory underlying robust distributed learning algorithms, designed to resist adversarial machines, matches empirical observations when data is homogeneous. Under data heterogeneity, however, which is the norm in practical scenarios, established lower bounds on the learning error are essentially vacuous and greatly mismatch empirical observations. This is because the heterogeneity model considered is too restrictive and does not cover basic learning tasks such as least-squares regression. In this paper, we consider a more realistic heterogeneity model, namely $(G,B)$-gradient dissimilarity, and show that it covers a larger class of learning problems than existing theory. Notably, we show that the breakdown point under heterogeneity is lower than the classical fraction $\frac{1}{2}$.
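For context, a common formalization of $(G,B)$-gradient dissimilarity in the robust distributed learning literature (the paper's exact definition may differ in constants or normalization) is the following condition on the local losses $\mathcal{L}_i$ of the $n$ honest workers and their average $\mathcal{L} = \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_i$: for all model parameters $\theta$,

\[
\frac{1}{n} \sum_{i=1}^{n} \bigl\| \nabla \mathcal{L}_i(\theta) - \nabla \mathcal{L}(\theta) \bigr\|^2 \;\le\; G^2 + B^2 \bigl\| \nabla \mathcal{L}(\theta) \bigr\|^2 .
\]

Setting $B = 0$ recovers the classical bounded-heterogeneity ($G$-only) model. For least-squares regression, the local gradients are affine in $\theta$, so the left-hand side can grow quadratically with $\|\theta\|$ and no finite $G$ alone suffices; a term $B^2 \|\nabla \mathcal{L}(\theta)\|^2$ with $B > 0$ can absorb this growth, which is presumably why the $(G,B)$ model covers such tasks while the classical model does not.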