A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences
Asoodeh, Shahab, Liao, Jiachun, Calmon, Flavio P., Kosut, Oliver, Sankar, Lalitha
We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP). Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variants of differential privacy. We apply our result to the moments accountant framework for characterizing the privacy guarantees of stochastic gradient descent. Compared to the state of the art, our bounds may allow about 100 additional stochastic gradient descent iterations for training deep learning models under the same privacy budget.

Differential privacy (DP) [1] has become the de facto standard for privacy-preserving data analytics. Intuitively, a (potentially randomized) algorithm is differentially private if its output does not vary significantly under small perturbations of its input. DP guarantees are usually cast in terms of properties of the information density [2] of the algorithm's output conditioned on a given input, referred to as the privacy loss variable in the DP literature.
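To make the RDP-to-DP conversion concrete, here is a minimal sketch of the standard conversion (due to Mironov's RDP paper) that the work above tightens; the function name `rdp_to_dp` and the numeric inputs are illustrative, not from the paper, and the paper's optimal bound yields a smaller $\varepsilon$ than this baseline.

```python
import math

def rdp_to_dp(alpha: float, eps_rdp: float, delta: float) -> float:
    """Standard conversion: an (alpha, eps_rdp)-RDP mechanism satisfies
    (eps, delta)-DP with eps = eps_rdp + log(1/delta) / (alpha - 1).
    The paper derives a tighter (optimal) conversion via f-divergences."""
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1.0)

# Illustrative example: a mechanism with (alpha=10, eps_rdp=0.5)-RDP,
# converted at the common target delta = 1e-5.
eps = rdp_to_dp(alpha=10.0, eps_rdp=0.5, delta=1e-5)
print(round(eps, 4))
```

In the moments accountant, such a conversion is applied after composing per-iteration RDP costs, so any improvement in the conversion translates directly into more training iterations at a fixed $(\varepsilon, \delta)$ budget.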
Jan-16-2020
- Country:
- North America > United States (0.28)
- Genre:
- Research Report (0.91)
- Industry:
- Information Technology > Security & Privacy (1.00)