Byzantine Stochastic Gradient Descent
Dan Alistarh
Neural Information Processing Systems
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of m machines that allegedly compute stochastic gradients every iteration, an α-fraction are Byzantine and may behave adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T = Õ(…)
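The setting above can be illustrated with a small simulation. The sketch below is not the paper's algorithm: it uses coordinate-wise-median aggregation, a common baseline defense, on a toy convex quadratic, with an α-fraction of the m workers sending adversarial gradients. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: m workers, an alpha-fraction Byzantine, robust aggregation
# via the coordinate-wise median (a baseline, not the paper's method).
rng = np.random.default_rng(0)
m, alpha = 20, 0.2           # number of machines, Byzantine fraction
n_byz = int(alpha * m)
d, T, lr = 5, 500, 0.1       # dimension, iterations, step size
x_star = np.ones(d)          # minimizer of f(x) = 0.5 * ||x - x_star||^2

def honest_grad(x):
    # stochastic gradient of the quadratic, with additive Gaussian noise
    return (x - x_star) + rng.normal(scale=0.5, size=d)

def byzantine_grad(x):
    # adversary sends a large gradient pointing away from the minimizer
    return -10.0 * (x - x_star) + 100.0

x = np.zeros(d)
for _ in range(T):
    grads = [byzantine_grad(x) for _ in range(n_byz)]
    grads += [honest_grad(x) for _ in range(m - n_byz)]
    g = np.median(np.stack(grads), axis=0)  # robust aggregation step
    x -= lr * g

# With fewer than half the workers Byzantine, the median ignores the
# outlier gradients and SGD still approaches x_star.
print(np.linalg.norm(x - x_star))
```

Averaging the gradients instead of taking the median would let the Byzantine workers drag the iterate arbitrarily far, which is exactly the failure mode that robust aggregation rules are designed to prevent.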