BUZz: BUffer Zones for defending adversarial examples in image classification
Nguyen, Phuong Ha, Mahmood, Kaleel, Nguyen, Lam M., Nguyen, Thanh, van Dijk, Marten
Phuong Ha Nguyen 1, Kaleel Mahmood 1, Lam M. Nguyen 2, Thanh Nguyen 3, Marten van Dijk 1,4 (equally contributed)
1 Department of Electrical and Computer Engineering, University of Connecticut, USA
2 IBM Research, Thomas J. Watson Research Center, Yorktown Heights, USA
3 Iowa State University, USA
4 CWI Amsterdam, The Netherlands
phuongha.ntu@gmail.com

Abstract: We propose a novel defense against all existing gradient-based adversarial attacks on deep neural networks for image classification. Our defense combines deep neural networks with simple image transformations. While straightforward to implement, this defense yields a unique security property which we term buffer zones. We argue that our buffer-zone defense is secure against state-of-the-art black-box attacks. We are able to achieve this security even when the adversary has access to the entire ...
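The abstract's idea of combining classifiers with simple input transformations can be illustrated with a minimal sketch: an ensemble in which each member sees its own fixed transformation of the input, and the ensemble outputs a class only when all members agree, abstaining otherwise (the "buffer zone"). The transformations, the toy threshold classifiers, and the unanimity rule below are illustrative assumptions, not the paper's exact architecture.

```python
def make_shift(offset):
    """Toy 'image transformation': add a fixed offset to every pixel."""
    return lambda image: [p + offset for p in image]

def make_threshold_classifier(threshold):
    """Toy classifier: label 1 if the mean pixel value exceeds a threshold."""
    return lambda image: 1 if sum(image) / len(image) > threshold else 0

ABSTAIN = -1  # sentinel for inputs that fall in the buffer zone

def buffer_zone_predict(image, transforms, classifiers):
    """Unanimous-vote ensemble: any disagreement triggers an abstention."""
    votes = [clf(t(image)) for t, clf in zip(transforms, classifiers)]
    return votes[0] if len(set(votes)) == 1 else ABSTAIN

# Two ensemble members, each preceded by a different fixed transformation.
transforms = [make_shift(0.0), make_shift(0.2)]
classifiers = [make_threshold_classifier(0.5), make_threshold_classifier(0.5)]

clean = [0.9, 0.8, 0.9]          # clearly class 1 under both views
borderline = [0.45, 0.45, 0.45]  # the two views disagree -> buffer zone

print(buffer_zone_predict(clean, transforms, classifiers))       # 1
print(buffer_zone_predict(borderline, transforms, classifiers))  # -1
```

The intuition is that an adversarial perturbation crafted against one member's decision boundary tends to break unanimity, so the attack lands in the abstention region instead of a wrong class.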
Oct-3-2019
- Country:
- Europe > Netherlands
- North Holland > Amsterdam (0.24)
- North America > United States
- Connecticut (0.24)
- Iowa (0.24)
- Genre:
- Research Report (0.64)
- Summary/Review (0.67)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Technology: