- North America > Canada > Ontario > Toronto (0.14)
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.93)
- (2 more...)
- North America > Canada > Quebec (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- North America > Canada (0.04)
- Europe > Italy > Emilia-Romagna > Modena Province > Modena (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > United States (0.14)
- Oceania > Australia > New South Wales (0.04)
Gradient based sample selection for online continual learning
A continual learning agent learns online from a non-stationary, never-ending stream of data. The key to such a learning process is overcoming catastrophic forgetting of previously seen data, a well-known problem of neural networks. To prevent forgetting, a replay buffer is usually employed to store previous data for rehearsal. Previous works often depend on task boundaries and i.i.d.
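The replay-buffer rehearsal the abstract describes can be sketched minimally with reservoir sampling — a standard boundary-free buffering baseline, not the paper's gradient-based selection rule; the class and names below are illustrative:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay buffer filled by reservoir sampling.

    After seeing n stream items, each item is stored with probability
    capacity / n, with no task-boundary information required -- which is
    why it suits online, task-free streams.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0  # total number of stream items observed so far

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)  # buffer not yet full: always keep
        else:
            # Keep the new item with probability capacity / seen,
            # evicting a uniformly chosen stored item.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw up to k stored items for a rehearsal step."""
        return random.sample(self.items, min(k, len(self.items)))
```

During training, each incoming batch would be interleaved with a `sample()` draw from the buffer so the network rehearses older data alongside the new.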
Randomized Neural Network with Adaptive Forward Regularization for Online Task-free Class Incremental Learning
Junda Wang, Minghui Hu, Ning Li, Abdulaziz Al-Ali, Ponnuthurai Nagaratnam Suganthan
Abstract: Class incremental learning (CIL) requires an agent to learn distinct tasks consecutively while retaining knowledge against forgetting. Two problems impede the practical application of CIL methods: (1) non-i.i.d batch streams arriving with no boundary prompts for updates, known as the harsher online task-free CIL (OTCIL) scenario; and (2) memory loss when learning long task streams, as shown in Figure 1 (a). To achieve efficient decision-making and decrease cumulative regret during the OTCIL process, a randomized neural network (Randomized NN) with forward regularization (-F) is proposed to resist forgetting and enhance learning performance: forward knowledge is exploited to reduce regret and deliver efficient decision-making for ensemble Randomized NN learning in long task streams, and the framework realizes one-pass incremental updates with less loss and superiority over ridge. Based on this framework, we derive the algorithm of the ensemble deep random vector functional link network (edRVFL) with adjustable forward regularization (-kF), where k mediates the intensity of the intervention, effectively avoiding replay of previous data and catastrophic forgetting. Moreover, to curb the unstable penalties caused by non-i.i.d streams and to mitigate the intractable tuning and distribution drift of -kF in OTCIL, we improve it to the plug-and-play edRVFL-kF-Bayes, in which all hard ks across multiple sub-learners are self-adaptively determined via Bayesian learning. Extensive experiments were conducted on 2 image datasets and analyzed from multiple views (6 metrics, dynamic behavior, ablation tests, and compatibility), distinctly validating the efficacy of the OTCIL frameworks in the -kF-Bayes and -kF styles and their robustness even with a large PTM.

This paper was submitted to an Elsevier journal in Feb. 2025. This work was supported by the National Natural Science Foundation of China under Grants 62273230 and 62203302, and by the State Scholarship Fund of the China Scholarship Council under Grant 202206230182.
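The "one-pass incremental updates" of a ridge-style readout that the abstract refers to can be sketched with a recursive least-squares (Sherman–Morrison) update — a standard technique for randomized-network readouts, not the paper's -kF algorithm; the function names here are illustrative:

```python
import numpy as np

def ridge_init(d, lam):
    """Start from the batch ridge solution with zero data:
    P = (lam * I)^-1 tracks the inverse regularized covariance."""
    return np.eye(d) / lam, np.zeros((d, 1))

def ridge_update(P, w, h, y):
    """One-pass ridge update after seeing one feature vector h (d,1)
    with target y (1,1), via a Sherman-Morrison rank-1 update.

    After processing the whole stream, w equals the batch ridge
    solution (lam*I + H^T H)^-1 H^T Y exactly (up to rounding).
    """
    Ph = P @ h
    g = Ph / (1.0 + h.T @ Ph)   # gain vector
    P = P - g @ Ph.T            # updated inverse covariance
    w = w + g @ (y - h.T @ w)   # recursive least-squares step
    return P, w
```

Because each sample touches the model once and nothing is stored, an update of this shape avoids replaying previous data — the property the abstract highlights for the edRVFL readout.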
- North America > United States (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- Asia > Singapore (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Neurology (0.54)
- Education > Educational Setting > Online (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.54)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- (2 more...)