Diffusion Denoising as a Certified Defense against Clean-label Poisoning
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
arXiv.org Artificial Intelligence
We present a certified defense against clean-label poisoning attacks. These attacks work by injecting a small number of poisoning samples (e.g., 1%) that contain $p$-norm bounded adversarial perturbations into the training data to induce a targeted misclassification of a test-time input. Inspired by the adversarial robustness achieved by *denoised smoothing*, we show how an off-the-shelf diffusion model can sanitize the tampered training data. We extensively test our defense against seven clean-label poisoning attacks and reduce their attack success to 0-16% with only a negligible drop in test-time accuracy. We compare our defense with existing countermeasures against clean-label poisoning, showing that ours reduces the attack success the most and offers the best model utility. Our results highlight the need for future work on developing stronger clean-label attacks, and for using our certified yet practical defense as a strong baseline to evaluate them.
Mar-18-2024