Adaptive Diffusion Denoised Smoothing: Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion
Frederick Shpilevskiy, Saiyue Lyu, Krishnamurthy Dj Dvijotham, Mathias Lécuyer, Pierre-André Noël
arXiv.org Artificial Intelligence
We propose Adaptive Diffusion Denoised Smoothing, a method for certifying the predictions of a vision model against adversarial examples, while adapting to the input. Our key insight is to reinterpret a guided denoising diffusion model as a long sequence of adaptive Gaussian Differentially Private (GDP) mechanisms refining a pure noise sample into an image. We show that these adaptive mechanisms can be composed through a GDP privacy filter to analyze the end-to-end robustness of the guided denoising process, yielding a provable certification that extends the adaptive randomized smoothing analysis. We demonstrate that our design, under a specific guiding strategy, can improve both certified accuracy and standard accuracy on ImageNet for an $\ell_2$ threat model.
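The certification machinery the abstract builds on is standard $\ell_2$ randomized smoothing: if the smoothed classifier's top class has probability at least $p_A$ under Gaussian noise of scale $\sigma$, the prediction is certified within radius $\sigma \, \Phi^{-1}(p_A)$. The paper's contribution is to extend this analysis to adaptive, diffusion-guided noise via GDP composition; as background, here is a minimal sketch of the base (non-adaptive) certified radius. The function name and interface are illustrative, not from the paper.

```python
# Sketch of the standard l2 certified radius from randomized smoothing
# (the non-adaptive baseline that the paper's GDP-filter analysis extends).
from statistics import NormalDist

def certified_radius(p_a: float, sigma: float) -> float:
    """Return the l2 radius certified when the smoothed classifier's
    top class has (lower-bounded) probability p_a under N(0, sigma^2 I)
    input noise; returns 0.0 when no certificate holds (p_a <= 1/2)."""
    if p_a <= 0.5:
        return 0.0  # the top class must be a majority to certify
    # Phi^{-1} is the inverse CDF of the standard normal distribution.
    return sigma * NormalDist().inv_cdf(p_a)
```

In practice $p_A$ is a high-confidence lower bound estimated from Monte Carlo samples of the noisy classifier, so the radius grows with both the noise scale $\sigma$ and the classifier's confidence under noise.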
Jul-14-2025
- Country:
- North America > Canada > British Columbia (0.04)
- Genre:
- Research Report (0.40)
- Industry:
- Information Technology > Security & Privacy (0.68)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Vision (1.00)