A Little Is Enough: Circumventing Defenses For Distributed Learning

Moran Baruch, Gilad Baruch, Yoav Goldberg

arXiv.org Machine Learning 

Distributed learning is central to large-scale training of deep-learning models. However, it is exposed to a security threat in which Byzantine participants can interrupt or control the learning process. Previous attack models and their corresponding defenses assume that the rogue participants are (a) omniscient (know the data of all other participants), and (b) introduce large changes to the parameters. We show that small but well-crafted changes are sufficient, leading to a novel non-omniscient attack on distributed learning that goes undetected by all existing defenses. We demonstrate that our attack method works not only for preventing convergence but also for repurposing the model's behavior ("backdooring"). We show that 20% corrupt workers are sufficient to degrade a CIFAR10 model's accuracy by 50%, as well as to introduce backdoors into MNIST and CIFAR10 models without hurting their accuracy.
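The core idea of the attack can be sketched as follows. The corrupt workers estimate the per-coordinate mean and standard deviation of benign gradients from their own data (so no omniscience is needed) and all submit the mean shifted by a small multiple z of the standard deviation, where z is chosen so the malicious updates still look like plausible benign ones to a majority-based defense. The sketch below is our own reconstruction under those assumptions; the function name and variable names are ours, not the paper's.

```python
import numpy as np
from statistics import NormalDist


def little_is_enough_update(corrupt_grads, n_workers, n_corrupt):
    """Sketch of a non-omniscient small-perturbation attack.

    corrupt_grads: array of shape (n_corrupt, dim) -- gradients the
    attackers computed on their own data, used to estimate the benign
    mean and std per coordinate.
    """
    # Number of "supporting" benign workers the crafted update must
    # blend in with so that a majority-vote defense accepts it.
    s = (n_workers // 2 + 1) - n_corrupt
    # Largest z such that the shifted update still falls inside the
    # range covered by enough benign workers (normal approximation).
    z = NormalDist().inv_cdf((n_workers - n_corrupt - s)
                             / (n_workers - n_corrupt))
    mu = corrupt_grads.mean(axis=0)
    sigma = corrupt_grads.std(axis=0)
    # Every corrupt worker submits the same slightly shifted update;
    # the sign of the shift can also be flipped to push the opposite way.
    return mu + z * sigma
```

With, say, n_workers=50 and n_corrupt=12, z is a small positive constant, so each malicious coordinate stays within a fraction of one standard deviation of the estimated benign mean, which is why variance-based defenses do not flag it.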
