Generalization Bounds and Stopping Rules for Learning with Self-Selected Data
Julian Rodemann, James Bailie
Many learning paradigms self-select training data in light of previously learned parameters. Examples include active learning, semi-supervised learning, bandits, and boosting. Rodemann et al. (2024) unify them under the framework of "reciprocal learning". In this article, we address the question of how well these methods can generalize from their self-selected samples. In particular, we prove universal generalization bounds for reciprocal learning using covering numbers and Wasserstein ambiguity sets. Our results require no assumptions on the distribution of self-selected data, only verifiable conditions on the algorithms. We prove results for both convergent and finite-iteration solutions. The latter are anytime valid, thereby giving rise to stopping rules for a practitioner seeking to guarantee the out-of-sample performance of their reciprocal learning algorithm. Finally, we illustrate our bounds and stopping rules for reciprocal learning's special case of semi-supervised learning.
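To make the stopping-rule idea concrete, here is a minimal sketch of self-training (the semi-supervised special case mentioned in the abstract) with an anytime stopping criterion. The bound function `bound(t)` below is a hypothetical placeholder supplied by the user, not the paper's actual bound, and the nearest-centroid learner is an illustrative assumption; only the overall loop structure (self-select data, refit, stop once the bound is tight enough) mirrors the reciprocal learning setting.

```python
import numpy as np

def fit(X, y):
    """Illustrative base learner: nearest-centroid classifier."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_proba(model, X):
    """Softmax over negative distances to the class centroids."""
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    logits = -dists
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def self_training_with_stopping(X_lab, y_lab, X_unlab, bound, eps, max_iter=50):
    """Self-training loop with an anytime stopping rule.

    At each iteration t the model is refit, and training stops as soon as
    the (hypothetical, user-supplied) generalization bound(t) drops to eps,
    or the unlabeled pool is exhausted. Otherwise the most confidently
    predicted unlabeled point is pseudo-labeled and added to the sample --
    the "self-selection" step of reciprocal learning.
    """
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for t in range(1, max_iter + 1):
        model = fit(X, y)
        if pool.shape[0] == 0 or bound(t) <= eps:
            return model, t
        probs = predict_proba(model, pool)
        i = int(np.argmax(np.max(probs, axis=1)))  # most confident point
        X = np.vstack([X, pool[i:i + 1]])
        y = np.append(y, np.argmax(probs[i]))
        pool = np.delete(pool, i, axis=0)
    return model, max_iter
```

For example, with the placeholder bound `bound(t) = 1/t` and tolerance `eps = 0.25`, the loop self-selects three pseudo-labeled points and stops at iteration 4, the first iteration where the bound is met.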
May 13, 2025