$L_2$-Uniform Stability of Randomized Learning Algorithms: Sharper Generalization Bounds and Confidence Boosting
Neural Information Processing Systems
Exponential generalization bounds with near-optimal rates have recently been established for uniformly stable algorithms \citep{feldman2019high,bousquet2020sharper}. We seek to extend these best known high-probability bounds from deterministic learning algorithms to the regime of randomized learning. One simple approach is to define stability in expectation over the algorithm's randomness, which may yield a sharper stability parameter but only leads to guarantees on the on-average generalization error. Another natural option is to consider stability conditioned on the algorithm's randomness, which is far more stringent but may lead to generalization bounds that hold with high probability jointly over the randomness of the sample and the algorithm. The present paper addresses the tension between these two alternatives and makes progress towards relaxing it within a classic framework of confidence boosting.
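
To make the contrast between the two stability notions concrete, the following is a minimal LaTeX sketch under assumed notation (none of it is taken from the paper itself): $A(S; r)$ denotes the output of a randomized algorithm $A$ on sample $S$ with internal randomness $r$, $S \simeq S'$ denotes a pair of samples differing in a single example, $\ell$ is the loss, and $\gamma$ is an illustrative stability parameter.

% Stability in expectation over the internal randomness r of A
% (the weaker notion; it yields only on-average generalization guarantees):
\[
\sup_{S \simeq S'} \, \sup_{z} \; \mathbb{E}_{r}\!\left[ \ell(A(S; r), z) - \ell(A(S'; r), z) \right] \le \gamma .
\]
% Stability conditioned on the randomness r
% (far more stringent; it supports bounds that hold with high probability
% jointly over the sample and the algorithm's randomness):
\[
\sup_{S \simeq S'} \, \sup_{z} \; \bigl| \ell(A(S; r), z) - \ell(A(S'; r), z) \bigr| \le \gamma
\quad \text{for every realization of } r .
\]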
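The confidence-boosting framework mentioned above can be illustrated with a short Python sketch of the classic scheme: run the randomized learner independently on disjoint chunks of the data and return the hypothesis with the smallest error on a shared held-out set, so that a union bound plus a Hoeffding bound drives the failure probability down geometrically in the number of repetitions. The callables train_algorithm and loss are hypothetical placeholders; this is a generic illustration, not the paper's specific procedure.

import numpy as np

def boost_confidence(train_algorithm, loss, samples, k, val_fraction=0.2, seed=None):
    # Generic confidence-boosting sketch (assumed interface, not from the paper):
    # train_algorithm(data) -> hypothesis h; loss(h, z) -> float in [0, 1].
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_val = int(val_fraction * len(samples))
    val = [samples[i] for i in idx[:n_val]]        # shared validation set
    train = [samples[i] for i in idx[n_val:]]      # remaining data for training
    best_h, best_err = None, np.inf
    # k independent runs on disjoint training chunks.
    for chunk in np.array_split(np.arange(len(train)), k):
        h = train_algorithm([train[i] for i in chunk])
        err = np.mean([loss(h, z) for z in val])   # held-out error estimate
        if err < best_err:
            best_h, best_err = h, err
    return best_h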