Adaptive Variance-Penalized Continual Learning with Fisher Regularization

Sarkar, Krisanu

arXiv.org Artificial Intelligence 

Abstract-- The persistent challenge of catastrophic forgetting in neural networks has motivated extensive research in continual learning [1]. This work presents a novel continual learning framework that integrates Fisher-weighted asymmetric regularization of parameter variances within a variational learning paradigm. Comprehensive evaluations on standard continual learning benchmarks, including SplitMNIST, PermutedMNIST, and SplitFashionMNIST, demonstrate substantial improvements over existing approaches such as Variational Continual Learning [2] and Elastic Weight Consolidation [3]. The asymmetric variance penalty mechanism proves particularly effective at maintaining knowledge across sequential tasks while improving model accuracy. Experimental results show that our approach not only boosts immediate task performance but also significantly mitigates knowledge degradation over time, effectively addressing the fundamental challenge of catastrophic forgetting in neural networks [4].
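To make the abstract's central mechanism concrete, the following is a minimal NumPy sketch of what a Fisher-weighted asymmetric penalty on per-parameter variances could look like. All function names, hyperparameters (`alpha_up`, `alpha_down`), and the quadratic form are illustrative assumptions, not the authors' actual implementation: the idea shown is only that variance *increases* relative to the previous task (which risk forgetting) are penalized more strongly than decreases, with each parameter's penalty scaled by its Fisher information.

```python
import numpy as np

def asymmetric_variance_penalty(sigma2, sigma2_prev, fisher,
                                alpha_up=1.0, alpha_down=0.1):
    """Illustrative sketch of a Fisher-weighted asymmetric variance penalty.

    sigma2      -- current per-parameter posterior variances
    sigma2_prev -- variances after the previous task
    fisher      -- per-parameter Fisher information (importance weights)

    Deviations where variance grew (delta > 0) are weighted by alpha_up;
    shrinking variances get the milder alpha_down. Hyperparameter names
    and values are hypothetical.
    """
    delta = sigma2 - sigma2_prev
    weight = np.where(delta > 0, alpha_up, alpha_down)
    return float(np.sum(fisher * weight * delta ** 2))

# Toy usage: parameter 0's variance grew (penalized heavily),
# parameter 1's shrank (penalized lightly).
penalty = asymmetric_variance_penalty(
    sigma2=np.array([0.20, 0.05]),
    sigma2_prev=np.array([0.10, 0.10]),
    fisher=np.array([1.0, 2.0]),
)
```

In a variational continual learning loop, a term of this form would be added to the per-task variational objective so that important (high-Fisher) parameters resist becoming more uncertain.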