BUZz: BUffer Zones for defending adversarial examples in image classification

Nguyen, Phuong Ha, Mahmood, Kaleel, Nguyen, Lam M., Nguyen, Thanh, van Dijk, Marten

arXiv.org Machine Learning 

Phuong Ha Nguyen 1, Kaleel Mahmood 1, Lam M. Nguyen 2, Thanh Nguyen 3, Marten van Dijk 1,4
1 Department of Electrical and Computer Engineering, University of Connecticut, USA
2 IBM Research, Thomas J. Watson Research Center, Yorktown Heights, USA
3 Iowa State University, USA
4 CWI Amsterdam, The Netherlands
Equally contributed
phuongha.ntu@gmail.com

Abstract: We propose a novel defense against all existing gradient-based adversarial attacks on deep neural networks for image classification problems. Our defense is based on a combination of deep neural networks and simple image transformations. While straightforward in implementation, this defense yields a unique security property which we term buffer zones. We argue that our defense based on buffer zones is secure against state-of-the-art black-box attacks. We are able to achieve this security even when the adversary has access to the entire ...
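The abstract describes a defense built from an ensemble of networks, each paired with a simple fixed image transformation, where disagreement among members places an input in a rejection region (the buffer zone). A minimal sketch of that idea follows; the linear toy models, the sign-flip transform, and the specific weights are hypothetical stand-ins for the trained CNNs and image transformations (e.g. resizing, shifting) the paper itself uses.

```python
import numpy as np

def buffer_zone_predict(x, models, transforms):
    """Unanimity-vote ensemble (sketch): each member classifies its own
    fixed-transformed copy of the input; any disagreement lands the input
    in the buffer zone and it is rejected as adversarial (label -1)."""
    preds = [m(t(x)) for m, t in zip(models, transforms)]
    return preds[0] if len(set(preds)) == 1 else -1

# Hypothetical stand-ins for independently trained networks:
# binary linear classifiers, class 1 iff w.x + b > 0.
model_a = lambda v: int(v @ np.array([1.0, 1.0]) - 1.0 > 0)
# Model B is trained on sign-flipped inputs, so its weights compensate
# for its fixed "secret" transform and it agrees with A on clean data.
model_b = lambda v: int(v @ np.array([-1.2, -0.8]) - 1.0 > 0)

identity = lambda v: v
flip = lambda v: -v  # fixed per-member transform (toy analogue of an image shift)
models, transforms = [model_a, model_b], [identity, flip]

clean = np.array([2.0, 0.0])
adv = np.array([0.9, 0.0])  # crafted to cross only model A's decision boundary

print(buffer_zone_predict(clean, models, transforms))  # 1  (unanimous)
print(buffer_zone_predict(adv, models, transforms))    # -1 (buffer zone: rejected)
```

Because an adversarial perturbation tuned to one member's decision boundary rarely crosses every member's boundary at once, the disagreement region acts as the buffer that catches such inputs.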
