Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
Neural Information Processing Systems
This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model that is substantially tighter than prior work. The main intuition is to craft worst-case initial model parameters, since DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained on MNIST and CIFAR-10 at a theoretical $\varepsilon = 10.0$, …