Online Continual Learning with Maximally Interfered Retrieval
Neural Information Processing Systems
Continual learning, the setting where a learning agent is faced with a never-ending stream of data, remains a great challenge for modern machine learning systems. In particular, the online or "single pass through the data" setting has recently gained attention as a natural setting that is difficult to tackle. Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art on a number of standard benchmarks. These approaches typically rely on randomly selecting samples from the replay memory or from a generative model, which is suboptimal. In this work we consider a controlled sampling of memories for replay. We retrieve the samples that are most interfered, i.e. those whose prediction will be most negatively impacted by the foreseen parameter update. We show a formulation of this sampling criterion for both the generative replay and the experience replay settings, producing consistent gains in performance and greatly reduced forgetting.
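The abstract describes the retrieval criterion only informally, so the following is a minimal sketch of how maximally interfered retrieval could look in the experience-replay setting: take a "virtual" gradient step on the incoming batch, score each stored sample by how much its loss would increase under that foreseen update, and replay the top-scoring samples alongside the new data. The function names (`virtual_update`, `mir_retrieve`), the toy model, and the hyperparameters are illustrative assumptions, not the paper's reference implementation.

```python
# Hedged sketch of Maximally Interfered Retrieval (experience-replay variant).
# All names and hyperparameters below are illustrative, not taken from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def virtual_update(model, x_new, y_new, lr):
    """Return a copy of the model after one SGD step on the incoming batch."""
    candidate = copy.deepcopy(model)
    opt = torch.optim.SGD(candidate.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(candidate(x_new), y_new).backward()
    opt.step()
    return candidate


def mir_retrieve(model, candidate, mem_x, mem_y, k):
    """Pick the k memory samples whose loss increases most under the virtual update."""
    with torch.no_grad():
        loss_before = F.cross_entropy(model(mem_x), mem_y, reduction="none")
        loss_after = F.cross_entropy(candidate(mem_x), mem_y, reduction="none")
        interference = loss_after - loss_before  # foreseen increase in loss
        top = torch.topk(interference, k).indices
    return mem_x[top], mem_y[top]


if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # Toy incoming batch and replay memory (random stand-ins for stream data).
    x_new, y_new = torch.randn(10, 1, 28, 28), torch.randint(0, 10, (10,))
    mem_x, mem_y = torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))

    # Foresee the update on the incoming batch, then retrieve the most interfered memories.
    candidate = virtual_update(model, x_new, y_new, lr=0.1)
    x_mir, y_mir = mir_retrieve(model, candidate, mem_x, mem_y, k=10)

    # Actual update: incoming batch plus the maximally interfered memories.
    opt.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x_new, x_mir])),
                           torch.cat([y_new, y_mir]))
    loss.backward()
    opt.step()
```

In the generative-replay setting the same criterion would instead be searched for in the latent space of a generative model rather than over a stored buffer, but the loss-increase score plays the same role.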