continual-maml
Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
Continual learning agents experience a stream of (related) tasks. The main challenge is that the agent must not forget previous tasks while also adapting to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation.
[Table: "A unifying framework" — compares learning scenarios (e.g., supervised learning) along data distribution, model for fast weights, slow-weight updates, and evaluation.]
For readability, we omit OSAKA pre-training. Replay-based methods store representative samples from the past, for example in their original form (rehearsal). Most prior-based methods rely on known task boundaries, yet non-stationary data distributions break the i.i.d. assumption such methods depend on. Here, the update is computed from a parametric combination of the gradients of the current and previous tasks, and a Bayesian change-point detection scheme identifies whether the task has changed. Despite these challenges, meta-continual learning is actively researched [61, 6].
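As a rough illustration of the fast/slow-weight mechanics described above, here is a minimal sketch assuming a linear model with squared-error loss. The function names, the hyperparameters (`alpha`, `beta`, `gamma`, `threshold`), and the loss-spike heuristic standing in for Bayesian change-point detection are all illustrative assumptions, not the paper's actual algorithm or API.

```python
import numpy as np

def loss_and_grad(w, x, y):
    """Squared-error loss of a linear model y ~ x @ w and its gradient w.r.t. w."""
    err = x @ w - y
    return 0.5 * float(err @ err), x.T @ err

def continual_maml_step(slow_w, fast_w, x, y, prev_grad,
                        alpha=0.01, beta=0.001, gamma=0.5, threshold=5.0):
    """One online step of a Continual-MAML-style learner (illustrative sketch).

    Fast weights adapt to the current batch by SGD; when a loss spike suggests
    the task has changed (a crude stand-in for change-point detection), the slow
    weights take a step along a parametric combination of the current and
    previous gradients, and adaptation restarts from them.
    """
    loss, grad = loss_and_grad(fast_w, x, y)
    fast_w = fast_w - alpha * grad          # inner (fast-weight) update
    if loss > threshold:                    # crude task-change detection
        mixed = gamma * grad + (1.0 - gamma) * prev_grad
        slow_w = slow_w - beta * mixed      # outer (slow-weight) update
        fast_w = slow_w.copy()              # re-adapt from the new initialization
    return slow_w, fast_w, grad
```

The key design point the sketch tries to capture is the two time scales: fast weights move every step, while slow weights (the reusable initialization) only move when the stream appears to have shifted.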
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.45)
- North America > Canada > Quebec > Montreal (0.14)
- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.04)
- Instructional Material (0.46)
- Research Report > New Finding (0.46)
- Education (0.93)
- Health & Medicine (0.68)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.68)
Online Adaptation of Learned Vehicle Dynamics Model with Meta-Learning Approach
Tsuchiya, Yuki, Balch, Thomas, Drews, Paul, Rosman, Guy
We represent a vehicle dynamics model for autonomous driving near the limits of handling via a multi-layer neural network. Online adaptation is desirable in order to address unseen environments. However, the model needs to adapt to new environments without forgetting previously encountered ones. In this study, we apply Continual-MAML to overcome this difficulty. It enables the model to adapt to previously encountered environments quickly and efficiently by starting updates from optimized initial parameters. We evaluate online model adaptation in terms of inference performance and its impact on the control performance of a model predictive path integral (MPPI) controller using the TRIKart platform. The neural network was pre-trained using driving data collected in our test environment, and experiments for online adaptation were executed on multiple road conditions not contained in the training data. Empirical results show that the model using Continual-MAML outperforms both the fixed model and the model adapted by plain gradient descent in test-set loss and in the online tracking performance of MPPI.
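The adaptation mechanism described in the abstract — a few update steps on recent data, starting from a pre-trained initialization — can be sketched roughly as below. This is a minimal sketch assuming a linear dynamics model x' ≈ A x + B u instead of the authors' neural network; `lr`, `n_steps`, and the function names are assumptions, not their implementation.

```python
import numpy as np

def adapt_online(theta_init, transitions, lr=0.05, n_steps=5):
    """A few gradient steps on recent transitions, starting from the
    pre-trained initialization (illustrative sketch of online adaptation)."""
    A, B = theta_init[0].copy(), theta_init[1].copy()
    for _ in range(n_steps):
        gA = np.zeros_like(A)
        gB = np.zeros_like(B)
        for x, u, x_next in transitions:
            err = A @ x + B @ u - x_next    # one-step prediction error
            gA += np.outer(err, x)
            gB += np.outer(err, u)
        A -= lr * gA / len(transitions)     # gradient step on mean squared error
        B -= lr * gB / len(transitions)
    return A, B
```

Starting from a meta-learned initialization rather than random weights is what lets a handful of steps suffice on a new road condition; the "fixed model" baseline in the paper would simply skip this adaptation loop.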
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- Automobiles & Trucks (0.67)
- Transportation > Ground > Road (0.35)
- Information Technology > Robotics & Automation (0.35)