Sample Compression for Continual Learning
Comeau, Jacob, Bazinet, Mathieu, Germain, Pascal, Subakan, Cem
arXiv.org Artificial Intelligence
Continual learning algorithms aim to learn from a sequence of tasks, making the training distribution non-stationary. The majority of existing continual learning approaches rely on heuristics and do not provide learning guarantees for the continual learning setup. In this paper, we present a new method called 'Continual Pick-to-Learn' (CoP2L), which efficiently retains the most representative samples for each task. The algorithm is adapted from the Pick-to-Learn algorithm, rooted in sample compression theory. This allows us to provide high-confidence upper bounds on the generalization loss of the learned predictors, numerically computable after every update of the learned model. We also show empirically, on several standard continual learning benchmarks, that our algorithm outperforms standard experience replay, significantly mitigating catastrophic forgetting.
Mar-13-2025
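The abstract describes selecting a small set of representative samples via a Pick-to-Learn-style procedure. As a rough illustration of the underlying idea (not the paper's CoP2L algorithm), the sketch below greedily builds a compression set: seed it with one sample per class, then repeatedly add a currently misclassified training point until the set alone suffices to predict every training label. A 1-nearest-neighbour rule stands in for the learned model here purely for simplicity; this is an assumption, not the authors' method.

```python
import math

def nn_predict(compress_set, x):
    # 1-nearest-neighbour prediction from the current compression set
    # (a stand-in for the learned predictor; illustrative assumption).
    nearest = min(compress_set, key=lambda s: math.dist(s[0], x))
    return nearest[1]

def pick_to_learn_sketch(data):
    """Greedy compression-set selection in the spirit of Pick-to-Learn:
    keep adding a misclassified sample until the compression set
    correctly predicts every training point."""
    # Seed with the first sample of each label so every class is represented.
    seed = {}
    for x, y in data:
        seed.setdefault(y, (x, y))
    compress_set = list(seed.values())
    while True:
        errors = [(x, y) for x, y in data
                  if nn_predict(compress_set, x) != y]
        if not errors:
            return compress_set  # the set now explains all training labels
        compress_set.append(errors[0])  # add one misclassified sample
```

Because the final predictor is determined by the (small) compression set, sample compression theory yields generalization bounds that depend on the set's size rather than on the full training sample; in a continual setting, the retained set per task can also serve as a replay buffer.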