
Collaborating Authors

 Vitaly Feldman






Locally Private Learning without Interaction Requires Separation

Neural Information Processing Systems

We consider learning under the constraint of local differential privacy (LDP). For many learning problems known efficient algorithms in this model require many rounds of communication between the server and the clients holding the data points. Yet multi-round protocols are prohibitively slow in practice due to network latency and, as a result, currently deployed large-scale systems are limited to a single round. Despite significant research interest, very little is known about which learning problems can be solved by such non-interactive systems. The only lower bound we are aware of is for PAC learning an artificial class of functions with respect to a uniform distribution [39].
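To make the single-round setting above concrete, the canonical non-interactive LDP primitive is randomized response: each client perturbs its own data locally and sends one message, and the server debiases the aggregate. This is a generic illustration of the model, not the paper's algorithm; all function names here are my own.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """One-shot local DP report: the client keeps its bit with
    probability e^eps / (e^eps + 1) and flips it otherwise.
    No further interaction with the server is needed."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports: list[int], epsilon: float) -> float:
    """Server-side debiasing: since E[report] = (2p-1)*mu + (1-p)
    for true mean mu, invert that affine map on the empirical average."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    avg = sum(reports) / len(reports)
    return (avg - (1 - p)) / (2 * p - 1)
```

The entire protocol is one round: clients send noisy bits once, and the server never queries them again, which is exactly the regime whose learning power the paper studies.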


Private Stochastic Convex Optimization with Optimal Rates

Neural Information Processing Systems

We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex loss functions.
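Written out, the SCO objective is the following population-loss minimization (standard formulation; the symbols $w$, $\ell$, $\mathcal{K}$, $\mathcal{D}$ are the conventional ones, not notation from this abstract):

```latex
% Population loss over data distribution D, minimized over a convex body K:
\min_{w \in \mathcal{K}} \; L_{\mathcal{D}}(w),
\qquad
L_{\mathcal{D}}(w) := \mathbb{E}_{x \sim \mathcal{D}}\bigl[\ell(w, x)\bigr].
% The algorithm sees only n i.i.d. samples x_1, \dots, x_n \sim \mathcal{D},
% outputs \hat{w}, and is measured by the excess population loss
% L_{\mathcal{D}}(\hat{w}) - \min_{w \in \mathcal{K}} L_{\mathcal{D}}(w),
% subject to the output being differentially private in the samples.
```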



The Everlasting Database: Statistical Validity at a Fair Price

Neural Information Processing Systems

We propose a mechanism for answering an arbitrarily long sequence of potentially adaptive statistical queries, by charging a price for each query and using the proceeds to collect additional samples. Crucially, we guarantee statistical validity without any assumptions on how the queries are generated. We also ensure with high probability that the cost for M non-adaptive queries is O(log M), while the cost to a potentially adaptive user who makes M queries that do not depend on any others is O(√M).
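To see how per-query prices can produce those two growth rates, here is a toy schedule (my own illustration, not the paper's actual pricing mechanism): charging the k-th query about 1/k yields a total of roughly ln M, while charging 1/√k yields roughly 2√M.

```python
import math

def total_cost(M: int, adaptive: bool) -> float:
    """Cumulative charge for M queries under a toy price schedule.
    Non-adaptive users pay 1/k for the k-th query (harmonic sum,
    ~ln M total); adaptive users pay 1/sqrt(k) (~2*sqrt(M) total).
    Constants and schedule are illustrative only."""
    return sum(
        (1 / math.sqrt(k)) if adaptive else (1 / k)
        for k in range(1, M + 1)
    )
```

The point of the contrast: an honest non-adaptive user's bill stays essentially flat, while a user whose queries could exploit earlier answers funds the extra samples needed to keep the answers statistically valid.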


Generalization Bounds for Uniformly Stable Algorithms

Neural Information Processing Systems

Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0, 1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be within O((γ + 1/n)√(n log(1/δ))) of the empirical error with probability at least 1 − δ. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where γ ≥ 1/√n. At the same time the bound is known to be tight only when γ = O(1/n). We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the bound in this setting is O(√((γ + 1/n) log(1/δ))) with probability at least 1 − δ.
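Rendered cleanly, the two high-probability bounds being contrasted are (a sketch of the statements, writing the generalization error as the gap between population and empirical loss):

```latex
% Prior bound (Bousquet--Elisseeff style), with probability >= 1 - \delta:
\bigl| L_{\mathcal{D}}(A(S)) - \hat{L}_{S}(A(S)) \bigr|
  \le O\!\Bigl( (\gamma + 1/n)\sqrt{n \log(1/\delta)} \Bigr),
% First improved bound from this paper, with probability >= 1 - \delta:
\bigl| L_{\mathcal{D}}(A(S)) - \hat{L}_{S}(A(S)) \bigr|
  \le O\!\Bigl( \sqrt{(\gamma + 1/n)\log(1/\delta)} \Bigr).
% For the common regime \gamma = \Theta(1/\sqrt{n}), the prior bound is
% a vacuous O(\sqrt{\log(1/\delta)}) while the new one is O(n^{-1/4}\sqrt{\log(1/\delta)}).
```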