User-Level Private Learning via Correlated Sampling

Neural Information Processing Systems 

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample. In this work, we consider the setting where each user holds m samples and the privacy protection is enforced at the level of each user's data. We show that, in this setting, we may learn with far fewer users. Specifically, we show that, as long as each user receives sufficiently many samples, we can learn any privately learnable class via an (ε, δ)-DP algorithm using only O(log(1/δ)/ε) users. For ε-DP algorithms, we show that we can learn using only O
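As a purely illustrative sketch of the (ε, δ)-DP bound above, the snippet below evaluates a user count of the form O(log(1/δ)/ε) for sample parameter values; the function name and the constant `c` are placeholders, not quantities from the paper.

```python
from math import log, ceil

def users_needed(eps: float, delta: float, c: float = 1.0) -> int:
    """Illustrative user-count bound of the form c * log(1/delta) / eps.

    The constant c is a hypothetical placeholder; the paper only
    asserts the asymptotic O(log(1/delta)/eps) dependence.
    """
    return ceil(c * log(1.0 / delta) / eps)

# Shrinking delta raises the user count only logarithmically,
# while tightening eps raises it linearly.
print(users_needed(eps=1.0, delta=1e-6))  # ceil(ln(10^6)) = 14
print(users_needed(eps=0.5, delta=1e-6))  # ceil(2 * ln(10^6)) = 28
```

The takeaway mirrors the abstract: the required number of users is independent of the learning problem's sample complexity once each user holds sufficiently many samples.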