Reviews: Uncertainty-based Continual Learning with Adaptive Regularization
–Neural Information Processing Systems
This paper proposes uncertainty-regularized continual learning (UCL) to address catastrophic forgetting in neural networks. In detail, the method improves over variational continual learning (VCL) by modifying the KL regularizer in the mean-field Gaussian prior/posterior setting. The approach is justified mainly by intuitive explanation rather than theoretical/mathematical arguments. Experiments are performed on supervised continual learning benchmarks (split and permuted MNIST), and the method outperforms previous baselines (VCL, SI, EWC, HAT). Reviewers include experts in continual learning.
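For context on the regularizer being modified: VCL penalizes the KL divergence between the current task's posterior and the previous task's posterior, which has a closed form for mean-field (diagonal) Gaussians. A minimal sketch of that standard closed-form KL is below; the function name and array shapes are illustrative, not from the paper.

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) between mean-field (diagonal) Gaussians,
    summed over all parameters. This is the standard VCL-style penalty;
    UCL's contribution is to reshape/adapt this term (not shown here)."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Identical posterior and prior incur zero penalty.
mu, sigma = np.zeros(4), np.ones(4)
print(kl_diag_gaussians(mu, sigma, mu, sigma))  # 0.0
```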
Jan-22-2025, 12:59:17 GMT