A Differentially Private Stochastic Convex Optimization

In this section, we provide analyses of our near-linear time algorithms for DP-SCO with near-optimal …
Neural Information Processing Systems
A.1 Supporting Lemmas

In the phased algorithms for both convex minimization and convex-concave minimax problems, we …

By Definition 1 of (ε,δ)-differential privacy, the proof is complete. … can be used, e.g., see methods in [45, 20]; here we only give a detailed proof of the regularized version. The proof of Lemma A.2 can be derived … It is worth mentioning that Lemma A.3 does not require the Lipschitzness … For the generalization error, we follow the standard results on stability and generalization.

The proof of Theorem 3.3, which gives its guarantees, is provided below. We first prove the privacy guarantee: by Definition 2, Algorithm 1 is (ε,δ)-DP when setting σ = 4L√(2 log(2…
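To make the privacy calibration above concrete, the following is a minimal sketch of the classical Gaussian mechanism applied to an averaged, norm-clipped gradient. It is not the paper's Algorithm 1: the noise scale here uses the textbook constant √(2 ln(1.25/δ))·Δ₂/ε rather than the paper's setting, and the function names (`gaussian_mechanism_sigma`, `private_mean_gradient`) are illustrative.

```python
import numpy as np

def gaussian_mechanism_sigma(l2_sensitivity, eps, delta):
    """Classical Gaussian-mechanism noise scale for (eps, delta)-DP:
    sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / eps."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps

def private_mean_gradient(grads, L, eps, delta, rng):
    """Release the average of per-sample gradients, each clipped to
    L2-norm at most L. Replacing one sample changes the average by at
    most 2L/n in L2-norm, so the sensitivity is 2L/n."""
    n = len(grads)
    clipped = [g * min(1.0, L / np.linalg.norm(g)) for g in grads]
    avg = np.mean(clipped, axis=0)
    sigma = gaussian_mechanism_sigma(2.0 * L / n, eps, delta)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

With a large privacy budget the released average is close to the true clipped average, while smaller ε inflates the noise proportionally, matching the 1/ε dependence in the noise level set in the proof above.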