Littlestone Classes are Privately Online Learnable
Neural Information Processing Systems
We consider the problem of online classification under a privacy constraint. In this setting a learner observes sequentially a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a hypothesis $h_t$ which is used to predict the label of each new example $x_t$. The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$. We require that the algorithm satisfies the following privacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the algorithm needs to be an $(\epsilon, \delta)$-differentially private function of the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the first non-trivial regret bound for the realizable setting. Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes -- comparable to the optimal mistake bound in the non-private case, up to a logarithmic factor.
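To make the interaction model concrete, here is a minimal Python sketch of the online protocol described in the abstract: at each round the learner commits to a hypothesis $h_t$ before seeing the label $y_t$, and mistakes are counted over the stream. The `PrivateLearner` interface and its methods are hypothetical illustrations, not the paper's algorithm; a real instantiation must additionally guarantee that the hypothesis sequence $h_1, \ldots, h_T$ is an $(\epsilon, \delta)$-differentially private function of the input stream.

```python
# Sketch of the online classification protocol (interaction model only).
# `PrivateLearner` is a hypothetical interface, not the paper's algorithm:
# a real learner must ensure the sequence h_1, ..., h_T is an
# (eps, delta)-differentially private function of the whole input stream.

from typing import Callable, Iterable, Tuple

Example = Tuple[float, int]  # (x_t, y_t), with label y_t in {0, 1}

class PrivateLearner:
    """Hypothetical learner: outputs a hypothesis each round, then updates."""

    def predict_hypothesis(self) -> Callable[[float], int]:
        # Placeholder hypothesis; a real learner would privately select
        # h_t from the class H based on the examples seen so far.
        return lambda x: 0

    def update(self, x: float, y: int) -> None:
        # Incorporate (x_t, y_t); the update must preserve privacy.
        pass

def run_protocol(learner: PrivateLearner, stream: Iterable[Example]) -> int:
    """Realizable online loop: count mistakes over an oblivious stream."""
    mistakes = 0
    for x_t, y_t in stream:                 # t = 1, ..., T
        h_t = learner.predict_hypothesis()  # commit before seeing y_t
        if h_t(x_t) != y_t:
            mistakes += 1                   # paper: E[mistakes] = O(log T)
        learner.update(x_t, y_t)
    return mistakes
```

For example, `run_protocol(PrivateLearner(), [(0.1, 0), (0.7, 1)])` returns the mistake count on a two-round stream; the paper's result bounds this count in expectation by $O(\log T)$ when $\mathcal{H}$ has constant Littlestone dimension.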