
Collaborating Authors

Jan Vondrak



Generalization Bounds for Uniformly Stable Algorithms

Neural Information Processing Systems

Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0, 1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be within O((γ + 1/n)·√(n log(1/δ))) of the empirical error with probability at least 1 − δ. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where γ ≥ 1/√n. At the same time the bound is known to be tight only when γ = O(1/n). We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the bound in this setting is O(√((γ + 1/n) log(1/δ))) with probability at least 1 − δ.
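
For readability, the two bounds can be restated in display form. This is only a paraphrase of the abstract, with the standard definition of γ-uniform stability from Bousquet and Elisseeff (2002) added for context; the shorthand gen(A, S) for the gap between population and empirical error is introduced here and is not notation from the paper.

% γ-uniform stability (Bousquet and Elisseeff, 2002): for any two samples
% S, S' of size n that differ in a single example,
\[
  \sup_{z}\,\bigl|\ell(A(S), z) - \ell(A(S'), z)\bigr| \;\le\; \gamma .
\]
% Writing gen(A, S) for the difference between the population error and the
% empirical error of A(S), the classical bound and the first bound of this
% paper read:
\[
  \Pr\Bigl[\,\mathrm{gen}(A, S) \le O\bigl((\gamma + 1/n)\sqrt{n \log(1/\delta)}\bigr)\Bigr] \;\ge\; 1 - \delta
  \qquad \text{(classical)},
\]
\[
  \Pr\Bigl[\,\mathrm{gen}(A, S) \le O\bigl(\sqrt{(\gamma + 1/n)\log(1/\delta)}\bigr)\Bigr] \;\ge\; 1 - \delta
  \qquad \text{(this paper)}.
\]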