Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models
Kyra Ahrens, Hans Hergen Lehmann, Jae Hee Lee, Stefan Wermter
arXiv.org Artificial Intelligence
We address the Continual Learning (CL) problem, in which a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted from the original learning-from-scratch paradigm toward exploiting generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models focus only on separating class-specific features in the final representation layer and neglect the power of intermediate representations, which capture low- and mid-level features that are naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, requires no replay buffer, and works out of the box with any foundation model. LayUP improves over the state of the art on four of the seven class-incremental learning settings while requiring considerably less memory and computation than the next best baseline. Our results demonstrate that fully exploiting the representational capacity of pre-trained models in CL requires looking far beyond their final embeddings.
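The abstract suggests a simple recipe: concatenate features from several intermediate layers of a frozen pre-trained backbone, accumulate per-class prototypes and second-order (Gram) statistics across tasks, and classify with ridge-regularized prototypes. The following is a minimal sketch of that idea in NumPy, not the authors' implementation; the class name, the ridge term `lam`, and the random features standing in for backbone activations are illustrative placeholders, and details such as which layers to concatenate or any first-task adaptation of the backbone are omitted.

```python
# Hedged sketch of a LayUP-style rehearsal-free classifier (illustrative only).
# Assumption: each feature row already concatenates representations from the
# last K layers of a frozen pre-trained backbone.
import numpy as np

class LayerwisePrototypeClassifier:
    def __init__(self, feat_dim: int, num_classes: int, lam: float = 1e-3):
        self.G = np.zeros((feat_dim, feat_dim))     # second-order (Gram) statistics
        self.C = np.zeros((feat_dim, num_classes))  # per-class prototype accumulators
        self.lam = lam                              # ridge regularizer (placeholder value)

    def update(self, feats: np.ndarray, labels: np.ndarray) -> None:
        """Accumulate statistics for one task; feats has shape (N, feat_dim)."""
        self.G += feats.T @ feats
        for c in np.unique(labels):
            self.C[:, c] += feats[labels == c].sum(axis=0)

    def predict(self, feats: np.ndarray) -> np.ndarray:
        """Score classes with ridge-regularized (decorrelated) prototypes."""
        W = np.linalg.solve(self.G + self.lam * np.eye(self.G.shape[0]), self.C)
        return (feats @ W).argmax(axis=1)

# Toy usage: random features stand in for backbone activations over two tasks.
rng = np.random.default_rng(0)
feat_dim, num_classes = 256, 10
clf = LayerwisePrototypeClassifier(feat_dim, num_classes)
for task_classes in (range(0, 5), range(5, 10)):
    y = rng.integers(low=min(task_classes), high=max(task_classes) + 1, size=200)
    means = rng.normal(size=(num_classes, feat_dim))
    X = rng.normal(size=(200, feat_dim)) + np.eye(num_classes)[y] @ means
    clf.update(X, y)
print(clf.predict(rng.normal(size=(5, feat_dim))))  # five predicted class indices
```

Because only the Gram matrix and class accumulators are stored, no exemplars need to be replayed when new tasks arrive, which is what makes this style of classifier rehearsal-free.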
Dec-13-2023