Neural Information Processing Systems

We thank all reviewers for their thoughtful feedback, which helped us sharpen the presentation of our results. Regarding R1's questions on the bounds, we will present them more explicitly in the paper, as briefly described here: we refer R1 to Corollary 2.1. Combining this upper bound with the lower bound above (the right term in the max), Theorem 2 is also tight in this respect. Regarding R2's questions: our contribution focuses solely on expressiveness aspects, which draw the boundaries; note the experiments in Fig. 1. We are glad for R2's implementation, but since we do not know the experiment details it is hard to compare directly. Indeed, Kaplan et al. employ hyper-parameter tuning (learning rate, initialization, batch size, etc.).


Appendix: Learning discrete distributions: user vs. item-level privacy

A Proof of Lemma 1

Note that ˆp_i = (N_i + Z
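The displayed estimator is cut off by extraction. As an illustration only, here is a minimal sketch of a Laplace-mechanism histogram estimator of the assumed form ˆp_i = (N_i + Z_i)/n; the function name and the noise scale 2/ε (from L1 sensitivity 2 of the count vector) are our assumptions, not taken from the paper.

```python
import numpy as np

def laplace_histogram_estimator(samples, k, eps, rng):
    """Privatized estimate of a discrete distribution over {0, ..., k-1}.

    Assumed form (not from the source): p_hat_i = (N_i + Z_i) / n,
    with Z_i ~ Lap(2/eps); changing one sample moves two counts by 1,
    so the L1 sensitivity of the count vector is 2.
    """
    n = len(samples)
    counts = np.bincount(samples, minlength=k).astype(float)  # N_i
    noise = rng.laplace(scale=2.0 / eps, size=k)              # Z_i
    return (counts + noise) / n

# Usage: estimate a uniform distribution over 3 symbols.
rng = np.random.default_rng(1)
samples = rng.integers(0, 3, size=10_000)
p_hat = laplace_histogram_estimator(samples, k=3, eps=1.0, rng=rng)
```

With n = 10,000 samples the added noise is negligible, so each coordinate of `p_hat` lands close to 1/3.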


The proof of Assouad's lemma relies on Le Cam's method [Le Cam, 1973, Yu, 1997], which provides two-point lower bounds. The first term follows from the classic Le Cam lower bound (see [Yu, 1997, Lemma 1]). Next we need the group property of differential privacy: by [Acharya et al., 2020, Lemma 14], there exists a coupling with the required properties.

In this section we provide a learning lower bound for restricted estimators under pure differential privacy using Fano's method. The first term of (8) follows from the non-private Fano inequality. Combining with (9) gives the desired lower bound.

Proof of Theorem 3. To this end we need the following lemma from den Hollander [2012].
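For reference, the three tools invoked above can be stated generically as follows; the exact constants and metrics in the paper may differ.

```latex
% Le Cam's two-point bound: for parameters \theta_1, \theta_2 inducing
% distributions P_1, P_2 and a (pseudo-)metric d,
\inf_{\hat\theta}\max_{j\in\{1,2\}}
  \mathbb{E}_{P_j}\, d(\hat\theta,\theta_j)
  \;\ge\; \frac{d(\theta_1,\theta_2)}{2}\,
  \bigl(1-\mathrm{TV}(P_1,P_2)\bigr).

% Group property of \varepsilon-DP: if M is \varepsilon-DP and the
% datasets D, D' differ in at most k entries, then for all measurable S,
\Pr[M(D)\in S] \;\le\; e^{k\varepsilon}\,\Pr[M(D')\in S].

% Fano's method: for a finite family \{P_v\}_{v\in\mathcal{V}} whose
% parameters are \delta-separated in d, with V uniform on \mathcal{V}
% and X \sim P_V,
\inf_{\hat\theta}\max_{v\in\mathcal{V}}
  \mathbb{E}_{P_v}\, d(\hat\theta,\theta_v)
  \;\ge\; \frac{\delta}{2}
  \Bigl(1-\frac{I(V;X)+\log 2}{\log|\mathcal{V}|}\Bigr).
```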