Guarantees on learning depth-2 neural networks under a data-poisoning attack

Anirbit Mukherjee, Ramchandran Muthukumar

arXiv.org, Machine Learning

In recent times many state-of-the-art machine learning models have been shown to be fragile to adversarial attacks. In this work we attempt to build a theoretical understanding of adversarially robust learning with neural nets. We exhibit a specific class of depth-2 neural networks of finite size, together with a non-gradient stochastic algorithm that tries to recover the weights of the net generating the realizable true labels, in the presence of an oracle applying a bounded amount of malicious additive distortion to those labels. We prove nearly optimal tradeoffs among the magnitude of the adversarial attack and the accuracy and confidence achieved by the proposed algorithm. The seminal paper [35] was among the first to highlight a key vulnerability of state-of-the-art architectures such as GoogLeNet: adding small, imperceptible adversarial noise to test data can dramatically degrade the network's performance.
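The setup in the abstract (a depth-2 net with realizable labels, an oracle adding bounded distortion, and a non-gradient stochastic recovery procedure) can be illustrated with a small simulation. The sketch below is purely hypothetical and is not a transcription of the paper's algorithm: it assumes a net class of the form f_w(x) = (1/k) Σ_i ReLU(α_i ⟨w, x⟩) with fixed per-gate scalars, a bound THETA on the oracle's additive label corruption, and a GLM-Tron-style update that steps along a fixed linear functional of the residual rather than the loss gradient. All names (net, THETA, alphas, M) are invented for illustration.

```python
# Hypothetical sketch of the abstract's setting; NOT the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 5, 4000            # input dim, ReLU gates in the hidden layer, samples
THETA = 0.1                      # assumed bound on the oracle's additive label distortion

# Depth-2 net with one trainable vector w shared across k ReLU gates; the
# per-gate maps are fixed scalars here so the sketch provably contracts.
alphas = rng.uniform(0.5, 1.5, size=k)
w_star = rng.standard_normal(d)  # ground-truth weights the learner must recover

def net(w, X):
    """f_w(x) = (1/k) * sum_i ReLU(alpha_i * <w, x>), vectorized over rows of X."""
    z = X @ w
    return np.mean([np.maximum(0.0, a * z) for a in alphas], axis=0)

X = rng.standard_normal((n, d))
# The oracle corrupts each realizable label by at most THETA, here pushing
# adversarially against the true signal.
y = net(w_star, X) - THETA * np.sign(X @ w_star)

# Non-gradient stochastic recovery: move w along a fixed linear map of the
# batch residual (the ReLU is never differentiated), Tron-style.
M = alphas.mean()
w, eta, batch = np.zeros(d), 0.05, 64
for t in range(4000):
    idx = rng.choice(n, size=batch, replace=False)
    residual = y[idx] - net(w, X[idx])
    w = w + eta * M * (residual @ X[idx]) / batch

print(f"||w - w*|| = {np.linalg.norm(w - w_star):.3f}  (floor scales with THETA={THETA})")
```

Running this, the recovery error plateaus at a level proportional to THETA rather than decaying to zero, which mirrors the kind of attack-magnitude/accuracy tradeoff the abstract refers to: the larger the allowed label distortion, the larger the ball around the true weights to which any such procedure can converge.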
