Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer
Neural Information Processing Systems
We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of $n$ workers, up to $f$ of which are Byzantine, we ask how resilient SGD can be, without limiting the dimension or the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e., current approaches) tolerates a single Byzantine failure.
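The vulnerability of linear aggregation can be illustrated with a minimal sketch (not taken from the paper): if the server averages the proposed gradients, a single Byzantine worker that knows the honest workers' proposals can choose its own vector so that the average equals any target of its choosing.

```python
import numpy as np

def average(gradients):
    """Linear aggregation rule: plain coordinate-wise mean."""
    return np.mean(gradients, axis=0)

rng = np.random.default_rng(0)
n, d = 10, 5  # n workers, d-dimensional parameter space (illustrative values)

# n - 1 honest workers propose stochastic gradient estimates.
honest = [rng.normal(size=d) for _ in range(n - 1)]

# The attacker picks an arbitrary target vector for the aggregate.
target = np.full(d, 100.0)

# One Byzantine worker proposes n*target - sum(honest),
# which forces the mean of all n vectors to be exactly `target`.
byzantine = n * target - np.sum(honest, axis=0)

aggregate = average(honest + [byzantine])
print(np.allclose(aggregate, target))  # the attacker fully controls the update
```

The same construction works for any fixed linear combination with known weights, which is the intuition behind the paper's impossibility result for linear aggregation rules.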