Randomized Neural Network with Adaptive Forward Regularization for Online Task-free Class Incremental Learning
Junda Wang, Minghui Hu, Ning Li, Abdulaziz Al-Ali, Ponnuthurai Nagaratnam Suganthan
arXiv.org Artificial Intelligence
To better accommodate OTCIL scenarios, forward knowledge is exploited to reduce regret and deliver efficient decision-making for ensemble Randomized NN learning in long task streams. This framework realizes one-pass incremental updates with lower loss and superiority over ridge regression. Based on the framework, the edRVFL-kF algorithm with adjustable forward regularization is derived, effectively avoiding replay of previous samples and catastrophic forgetting. To overcome the intractable tuning and distribution drift of -kF, we further propose edRVFL-kF-Bayes, whose ks are synchronously self-adapted via Bayesian learning in non-i.i.d. OTCIL streams. Extensive experiments were conducted on image datasets, and the results were analyzed from multiple views (including 6 metrics, dynamic behaviors, and ablation tests), revealing the outstanding performance of edRVFL-kF-Bayes and its robustness even with a large PTM.
Abstract
Class incremental learning (CIL) requires an agent to learn distinct tasks consecutively while retaining knowledge against forgetting. Two problems impede the practical application of CIL methods: (1) non-i.i.d. batch streams with no boundary prompts for updates, known as the harsher online task-free CIL (OTCIL) scenario; (2) CIL methods suffer from memory loss when learning long task streams, as shown in Figure 1(a). To achieve efficient decision-making and decrease cumulative regret during the OTCIL process, a randomized neural network (Randomized NN) with forward regularization (-F) is proposed to resist forgetting and enhance learning performance. Based on this framework, we derive the algorithm of the ensemble deep random vector functional link network (edRVFL) with adjustable forward regularization (-kF), where k mediates the intensity of the intervention. Moreover, to curb unstable penalties caused by non-i.i.d. data and to mitigate the intractable tuning of -kF in OTCIL, we improve it to the plug-and-play edRVFL-kF-Bayes, enabling all hard ks in multiple sub-learners to be self-adaptively determined via Bayesian learning. Experiments were conducted on 2 image datasets covering 6 metrics, dynamic performance, ablation tests, and compatibility, distinctly validating the efficacy of our OTCIL frameworks in the -kF-Bayes and -kF styles.
This work was supported by the National Natural Science Foundation of China under Grants 62273230 and 62203302, and by the State Scholarship Fund of the China Scholarship Council under Grant 202206230182. This paper was submitted to an Elsevier journal in Feb. 2025.
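The abstract mentions one-pass incremental updates for a Randomized NN with a forward-regularization intensity k. As a minimal sketch only (the paper's exact edRVFL-kF update is not given in this listing), one common Randomized-NN formulation accumulates ridge sufficient statistics over streamed batches and adds a term of strength k that pulls the output weights toward a reference solution `w_ref` encoding forward knowledge; the names `features`, `partial_fit`, `w_ref`, and all dimensions below are illustrative assumptions, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hid = 8, 32
# fixed random hidden weights, RVFL-style (not trained)
W_rand = rng.normal(size=(d_in, d_hid))

def features(X):
    # random-feature expansion with direct input links, as in RVFL nets
    return np.hstack([X, np.tanh(X @ W_rand)])

lam, k = 1e-2, 0.5                 # ridge strength and forward intensity (assumed values)
d = d_in + d_hid
A = np.zeros((d, d))               # running Gram matrix  H^T H
b = np.zeros(d)                    # running moment       H^T y
w_ref = np.zeros(d)                # reference weights standing in for forward knowledge

def partial_fit(X_batch, y_batch):
    """One-pass update: accumulate sufficient statistics, then solve
    min_w ||Hw - y||^2 + lam*||w||^2 + k*||w - w_ref||^2 in closed form."""
    global A, b
    H = features(X_batch)
    A += H.T @ H
    b += H.T @ y_batch
    return np.linalg.solve(A + (lam + k) * np.eye(d), b + k * w_ref)

# stream two batches without task boundaries, as in the OTCIL setting
X1, y1 = rng.normal(size=(20, d_in)), rng.normal(size=20)
w = partial_fit(X1, y1)
X2, y2 = rng.normal(size=(20, d_in)), rng.normal(size=20)
w = partial_fit(X2, y2)
print(w.shape)  # (40,)
```

Because only the Gram matrix and moment vector are stored, each update costs one linear solve and no previous samples are replayed, which matches the replay-free, one-pass behavior the abstract describes.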
Oct-27-2025