A Experimental setup

Neural Information Processing Systems

In this section, we detail the model architectures examined in the experiments and list all hyperparameters used. Both architectures consist of five stages, each a combination of convolutional layers with ReLU activations and max pooling layers. The base numbers of channels in consecutive stages of the VGG architectures are 64, 128, 256, 512, and 512. In the ResNets, the subsequent stages are composed of residual blocks, and we report the results for their 'conv2' layers.
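The five-stage VGG-style layout described above can be sketched as a simple layer plan. The stage channel widths (64, 128, 256, 512, 512) come from the text; the number of convolutions per stage and the use of 3x3 kernels are illustrative assumptions, not details stated here.

```python
# Hypothetical sketch of the five-stage VGG-style layout described above.
# Channel widths per stage are taken from the text; convs_per_stage and
# kernel sizes are illustrative assumptions.
VGG_CHANNELS = [64, 128, 256, 512, 512]

def vgg_stage_plan(channels=VGG_CHANNELS, convs_per_stage=2, in_ch=3):
    """Return a flat layer plan: conv+ReLU blocks, then a max pool, per stage."""
    plan = []
    for out_ch in channels:
        for _ in range(convs_per_stage):
            plan.append(("conv3x3+relu", in_ch, out_ch))
            in_ch = out_ch  # later convs in a stage keep the stage width
        plan.append(("maxpool2x2",))  # each max pool halves spatial resolution
    return plan

plan = vgg_stage_plan()
```

With two convolutions per stage, the plan has 15 entries (five stages of two conv blocks plus one pooling layer each), and the input channel count of each stage's first convolution matches the previous stage's width.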





Label Poisoning is All You Need

Neural Information Processing Systems

In a backdoor attack, an adversary injects corrupted data into a model's training dataset in order to gain control over its predictions on images with a specific attacker-defined trigger. A typical corrupted training example requires altering both the image, by applying the trigger, and the label. Models trained on clean images have therefore been considered safe from backdoor attacks. However, in some common machine learning scenarios, such as crowd-sourced annotation and knowledge distillation, the training labels are provided by potentially malicious third parties. Hence, we investigate a fundamental question: can we launch a successful backdoor attack by corrupting only the labels?
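The label-only threat model above can be illustrated with a minimal sketch: the adversary leaves every image untouched and rewrites a fraction of the training labels to an attacker-chosen target class. This is a naive random-flip stand-in for illustration only; the paper's actual attack chooses which labels to flip adversarially, which this sketch does not attempt.

```python
import random

def poison_labels(labels, target_class, poison_fraction, seed=0):
    """Hypothetical label-only corruption sketch.

    Flips a fraction of training labels to the attacker's target class,
    leaving the images themselves unmodified. The selection here is
    uniformly random; a real attack would pick victims adversarially.
    """
    rng = random.Random(seed)
    n_poison = int(poison_fraction * len(labels))
    victims = rng.sample(range(len(labels)), n_poison)
    poisoned = list(labels)  # copy: the clean labels are left intact
    for i in victims:
        poisoned[i] = target_class
    return poisoned, victims
```

Because only the returned label list differs from the clean one, a data pipeline that trusts third-party annotations would ingest the poisoned set without any image-level anomaly to detect.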