Blurred-Dilated Method for Adversarial Attacks

Neural Information Processing Systems

Deep neural networks (DNNs) are vulnerable to adversarial attacks, which lead to incorrect predictions. In black-box settings, transfer attacks can be conveniently used to generate adversarial examples. However, such examples tend to overfit the specific architecture and feature representations of the source model, resulting in poor attack performance against other target models. To overcome this drawback, in this paper we propose a novel model-modification-based transfer attack: the Blurred-Dilated (BD) method. In summary, BD reduces downsampling in the source model while introducing BlurPool and dilated convolutions.
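The abstract names the two building blocks of BD, BlurPool (blur-then-subsample, in place of plain strided downsampling) and dilated convolutions (which enlarge the receptive field without downsampling). As an illustration only, not the authors' code, a minimal 1-D NumPy sketch of both operations might look like:

```python
import numpy as np

def blur_pool_1d(x, stride=2):
    # BlurPool-style anti-aliased downsampling: low-pass filter with a
    # binomial kernel first, then subsample, instead of plain strided pooling.
    k = np.array([1.0, 2.0, 1.0]) / 4.0      # binomial blur kernel
    blurred = np.convolve(x, k, mode="same")  # blur (zero-padded at edges)
    return blurred[::stride]                  # then subsample

def dilated_conv_1d(x, w, dilation=2):
    # Dilated convolution: space the kernel taps `dilation` apart, so the
    # receptive field grows without reducing the output resolution.
    pad = dilation * (len(w) - 1) // 2
    xp = np.pad(x.astype(float), pad)         # zero-pad to keep output size
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j, wj in enumerate(w):
            out[i] += wj * xp[i + j * dilation]
    return out
```

In a real source model these would replace strided convolution/pooling layers and standard convolutions, respectively; the function names and the 1-D simplification here are illustrative assumptions.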


Blurred-Dilated Method for Adversarial Attacks (Supplementary Material)

Neural Information Processing Systems

Table S1 presents the structural details of Blurred-Dilated ResNet-20 (BD RN20) used on the CIFAR-10 dataset. Table S2 shows the structural details of Blurred-Dilated ResNet-56 (BD RN56) used on the CIFAR-100 dataset. We conduct further ablation studies to examine the key modification choices of our method. Notations are the same as in Figure S1.
