Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation
Xiwen Wei, Guihong Li, Radu Marculescu
arXiv.org Artificial Intelligence
Catastrophic forgetting is a significant challenge in online continual learning (OCL), especially for non-stationary data streams that lack well-defined task boundaries. This challenge is exacerbated by the memory constraints and privacy concerns inherent in rehearsal buffers. To tackle catastrophic forgetting, we introduce Online-LoRA, a novel framework for task-free OCL. Online-LoRA fine-tunes pre-trained Vision Transformer (ViT) models in real time, addressing the limitations of rehearsal buffers while leveraging the performance benefits of pre-trained models. As its main contribution, our approach features a novel online weight regularization strategy that identifies and consolidates important model parameters. Moreover, Online-LoRA leverages the training dynamics of loss values to automatically recognize shifts in the data distribution. Extensive experiments across many task-free OCL scenarios and benchmark datasets (including CIFAR-100, ImageNet-R, ImageNet-S, CUB-200, and CORe50) demonstrate that Online-LoRA adapts robustly to various ViT architectures while outperforming state-of-the-art (SOTA) methods. Our code will be publicly available at: https://github.com/Christina200/Online-LoRA-official.git.
November 8, 2024
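
To make the abstract's two core mechanisms concrete, below is a minimal sketch of low-rank adaptation on a frozen pre-trained layer plus a loss-dynamics heuristic for spotting distribution shifts. This is an illustration based only on the abstract, not the authors' implementation: the class `LoRALinear`, the function `shift_detected`, and all hyperparameters (`rank`, `alpha`, `window`, `ratio`) are assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): a LoRA adapter on a
# frozen linear layer, and a simple loss-window heuristic for shift detection.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay fixed
        # Low-rank factors: only A and B are updated on the online stream.
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A @ self.B) * self.scale

def shift_detected(losses, window=20, ratio=1.5):
    """Flag a distribution shift when the recent mean loss jumps well above
    the preceding window's mean (an illustrative heuristic)."""
    if len(losses) < 2 * window:
        return False
    prev = sum(losses[-2 * window:-window]) / window
    curr = sum(losses[-window:]) / window
    return curr > ratio * prev

# Usage sketch: wrap, e.g., a ViT attention projection and train only the
# LoRA parameters on the incoming stream.
layer = LoRALinear(nn.Linear(768, 768))
optimizer = torch.optim.AdamW([layer.A, layer.B], lr=1e-4)
```

On a detected shift, the paper's method would also consolidate important parameters via its online weight regularization; that consolidation step is not reproduced in this sketch.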