

Smoothed Analysis of Online and Differentially Private Learning

Neural Information Processing Systems

Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms. The primary quantity that characterizes learnability in these settings is the Littlestone dimension of the class of hypotheses [Ben-David et al., 2009, Alon et al., 2019]. This characterization is often interpreted as an impossibility result because classes such as linear thresholds and neural networks have infinite Littlestone dimension. In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations.
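The smoothed-adversary setting described above can be illustrated with a toy sketch (this is not the paper's algorithm; all names and parameters below are illustrative assumptions). A worst-case adversary for 1-D thresholds would place points exactly on the learner's decision boundary; here the adversary still aims at the learner's current threshold, but nature perturbs each point with uniform noise of width sigma, and a simple follow-the-leader learner over a finite net of thresholds incurs small regret:

```python
import random

# Toy illustration of online learning of 1-D thresholds on [0, 1]
# against a smoothed adversary. Illustrative sketch only, not the
# algorithm from the paper.

GRID = [i / 100 for i in range(101)]  # finite net of candidate thresholds

def predict(theta, x):
    """Threshold classifier: label 1 iff x >= theta."""
    return 1 if x >= theta else 0

def run(T=2000, sigma=0.1, target=0.5, seed=0):
    random.seed(seed)
    losses = [0] * len(GRID)  # cumulative mistakes of each grid threshold
    mistakes = 0
    for _ in range(T):
        # Follow the leader: play the grid threshold with fewest past mistakes.
        i = min(range(len(GRID)), key=lambda j: losses[j])
        theta = GRID[i]
        # The adversary aims at the leader's threshold, but nature perturbs
        # the point with uniform noise of width sigma (a smooth distribution).
        x = min(1.0, max(0.0, theta + random.uniform(-sigma, sigma)))
        y = predict(target, x)  # labels come from a fixed target threshold
        mistakes += predict(theta, x) != y
        for j, t in enumerate(GRID):
            losses[j] += predict(t, x) != y
    # Regret against the best fixed grid threshold in hindsight.
    return mistakes - min(losses)
```

The point of the sketch is that a smooth adversary cannot concentrate probability mass exactly on the leader's decision boundary, which is the intuition behind replacing the (infinite) Littlestone dimension with quantities such as the bracketing number in the regret bounds.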


Review for NeurIPS paper: Smoothed Analysis of Online and Differentially Private Learning

Neural Information Processing Systems

Summary and Contributions: This paper studies the very interesting question of moving beyond worst-case adversarial bounds for private/online learning. This is of particular relevance since the question of the equivalence between private and online learning was recently resolved by Bun et al. 2020, making this a logical next step in this line of research. Rather than considering worst-case adversaries, the authors consider adversaries constrained to play instances drawn from alpha-smooth distributions (hence "smoothed analysis"). Although smooth adversaries for online or private learning have been studied in specific instances, this is the first work to consider the problem in full generality. Results:
- They show that online learning against smooth adversaries can be characterized by the bracketing number of the hypothesis class.
- They show that private learning against smooth adversaries can be characterized by the VC dimension of the hypothesis class.

