Learning a Low-Rank Feature Representation: Achieving Better Trade-Off between Stability and Plasticity in Continual Learning
Zhenrong Liu, Yang Li, Yi Gong, Yik-Chung Wu
arXiv.org Artificial Intelligence
In continual learning, networks confront a trade-off between stability and plasticity when trained on a sequence of tasks. To bolster plasticity without sacrificing stability, we propose a novel training algorithm called LRFR. This approach optimizes network parameters in the null space of the past tasks' feature representation matrix to guarantee stability. Concurrently, we judiciously select only a subset of neurons in each layer of the network while training individual tasks, so that the past tasks' feature representation matrix is learned with low rank. This increases the null space dimension available when designing network parameters for subsequent tasks, thereby enhancing plasticity. Using CIFAR-100 and TinyImageNet as benchmark datasets for continual learning, the proposed approach consistently outperforms state-of-the-art methods.
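The core mechanism described in the abstract can be illustrated with a minimal sketch: if a parameter update is projected onto the null space of the past tasks' feature representation matrix, the outputs on past-task features are unchanged (stability), and a lower-rank feature matrix leaves a larger null space for new updates (plasticity). This is a generic NumPy illustration of null-space projection, not the authors' LRFR implementation; the matrix `F`, the layer dimension `d`, and the helper `null_space_projection` are assumptions for the example.

```python
import numpy as np

def null_space_projection(F, tol=1e-10):
    """Return the (d x d) projector onto the null space of F.

    F has shape (r, d): each row is a stored past-task feature.
    The null space is spanned by the right singular vectors of F
    whose singular values are (numerically) zero.
    """
    _, s, Vt = np.linalg.svd(F, full_matrices=True)
    rank = int(np.sum(s > tol))
    V_null = Vt[rank:]            # orthonormal basis of null(F)
    return V_null.T @ V_null      # projection matrix P

rng = np.random.default_rng(0)
d = 8
F = rng.standard_normal((3, d))   # rank-3 past-task feature matrix
P = null_space_projection(F)      # null space has dimension d - 3 = 5

g = rng.standard_normal(d)        # candidate gradient step
g_proj = P @ g                    # step restricted to null(F)

# Past-task responses are untouched by the projected step:
assert np.allclose(F @ g_proj, 0, atol=1e-8)
```

A lower rank of `F` (fewer active neurons contributing to past-task features, in the paper's terms) directly enlarges the null space `d - rank(F)`, which is the extra freedom the abstract attributes to the low-rank representation.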
Dec-14-2023