One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu, Sizhe Chen, Cihang Xie, Xiaolin Huang
arXiv.org Artificial Intelligence
Unlearnable examples (ULEs) aim to protect data from unauthorized use in training DNNs. The perturbations used by existing ULE methods, however, are easily eliminated by adversarial training and data augmentations. In this paper, we address this problem from a novel perspective by perturbing only one pixel in each image. Moreover, our One-Pixel Shortcut (OPS) cannot be erased by adversarial training or strong augmentations. To generate OPS, we perturb all in-class images at the same position to the same target value, chosen so that it deviates most, and most stably, from all the original images. Since this generation is based only on the images themselves, OPS requires significantly less computation than previous methods that rely on DNN generators. Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 to human eyes but drives trained models to extremely low accuracy. Even under adversarial training, a ResNet-18 trained on CIFAR-10-S reaches only 10.61% accuracy, compared with 83.02% for the existing error-minimizing method.

Deep neural networks (DNNs) have successfully promoted the computer vision field in the past decade. As DNNs scale up unprecedentedly (Brock et al., 2018; Huang et al., 2019; Riquelme et al., 2021; Zhang et al., 2022), data becomes increasingly vital. For example, ImageNet (Russakovsky et al., 2015) fostered the development of AlexNet (Krizhevsky et al., 2017).
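As a rough illustration of the generation step described above, the sketch below searches, per class, for the single pixel position and target value that deviates most and most stably from the in-class images, then writes that pixel into every image of the class. The scoring rule (mean deviation minus standard deviation across in-class images) and the candidate value grid are assumptions for illustration; they are not taken from the paper's exact objective.

```python
import numpy as np

def one_pixel_shortcut(images, labels, num_classes=10,
                       candidate_values=(0.0, 1.0)):
    """Sketch of OPS generation (hypothetical scoring rule).

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    images = images.copy()
    for c in range(num_classes):
        mask = labels == c
        cls_imgs = images[mask]                    # all in-class images
        best_score, best_pos, best_val = -np.inf, None, None
        for v in candidate_values:                 # assumed candidate grid
            # Per-pixel deviation of each image from the target value,
            # averaged over channels: shape (n_c, H, W).
            dev = np.abs(cls_imgs - v).mean(axis=-1)
            # "Mostly and stably deviate": reward a large mean deviation,
            # penalize variance across in-class images (assumed criterion).
            score = dev.mean(axis=0) - dev.std(axis=0)   # shape (H, W)
            pos = np.unravel_index(score.argmax(), score.shape)
            if score[pos] > best_score:
                best_score, best_pos, best_val = score[pos], pos, v
        h, w = best_pos
        images[mask, h, w, :] = best_val           # one shared pixel per class
    return images
```

Because the search touches only raw pixel statistics, no surrogate network or gradient computation is involved, which is where the claimed computational saving over DNN-generator methods comes from.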
Feb-26-2023
- Country:
  - North America > United States (0.28)
- Genre:
  - Research Report (0.40)
- Industry:
  - Information Technology > Security & Privacy (1.00)