Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
Neural Information Processing Systems
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Noisy-SGD). However, foundational theoretical questions about this algorithm's privacy loss remain open, even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant. This result reveals that all previous analyses for this setting have the wrong qualitative behavior.
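The algorithm studied here, Noisy-SGD, perturbs each stochastic gradient step with noise before updating and projecting back onto the bounded domain. The following is a minimal illustrative sketch of noisy projected SGD on a smooth convex loss, not the paper's own implementation; the function names, Gaussian noise model, step size, noise scale, and ball-shaped domain are assumptions made for the example.

```python
import numpy as np

def project_to_ball(x, radius):
    """Project x onto the Euclidean ball of the given radius (the bounded domain)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def noisy_sgd(grad_fn, dim, num_iters, step_size, noise_std, radius, rng=None):
    """Noisy-SGD sketch: each step adds Gaussian noise to the gradient,
    takes a step, and projects back onto the bounded domain."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(dim)
    for _ in range(num_iters):
        noisy_grad = grad_fn(x) + noise_std * rng.standard_normal(dim)
        x = project_to_ball(x - step_size * noisy_grad, radius)
    return x

# Example: smooth convex loss f(x) = 0.5 * ||x - target||^2 (illustrative only)
target = np.array([0.5, -0.25, 1.0])
x_final = noisy_sgd(grad_fn=lambda x: x - target, dim=3,
                    num_iters=1000, step_size=0.1,
                    noise_std=0.5, radius=1.0)
print(x_final)
```

In this setting, the paper's question is how the privacy loss of such an iteration grows with `num_iters`.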