

Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning

Neural Information Processing Systems

Continual learning agents experience a stream of (related) tasks. The main challenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation.


[Table A, "A unifying framework": columns for data distribution, model, fast-weights updates, slow-weights updates, and evaluation; rows for settings such as supervised learning (support S, query Q).]

Neural Information Processing Systems

For readability, we omit OSAKA pre-training. Replay-based methods store representative samples from the past, either in their original form (e.g., rehearsal). Most prior-based methods rely on task boundaries, and non-stationary data distributions break the i.i.d. assumption. The update is computed from a parametric combination of the gradients of the current and previous tasks. Despite this, meta-continual learning is actively researched [61, 6]. A Bayesian change-point detection scheme identifies whether a task has changed.
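The update rule and boundary detection described above can be sketched in a few lines. This is a hedged toy illustration only: a linear model, a loss-jump test standing in for Bayesian change-point detection, and illustrative names (`slow`, `fast`, `detect_shift`, `alpha`) that are assumptions, not the OSAKA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Mean squared error of a linear model and its gradient."""
    err = x @ w - y
    return float(np.mean(err ** 2)), 2 * x.T @ err / len(y)

def detect_shift(loss, baseline, threshold=1.0):
    """Crude change-point test: flag a task change on a loss jump."""
    return loss - baseline > threshold

slow = np.zeros(3)             # slow weights: pre-trained initialization
fast = slow.copy()             # fast weights: adapted online
prev_grad = np.zeros(3)
baseline, lr, alpha = 0.0, 0.1, 0.7   # alpha weights the current gradient

for step in range(100):
    # Toy non-stationary stream: the regression target flips halfway.
    target = np.array([1.0, -1.0, 0.5]) * (1.0 if step < 50 else -1.0)
    x = rng.normal(size=(8, 3))
    y = x @ target
    loss, grad = loss_and_grad(fast, x, y)
    if detect_shift(loss, baseline):
        fast = slow.copy()     # inferred task boundary: restart from slow weights
    # Parametric combination of current- and previous-task gradients.
    fast = fast - lr * (alpha * grad + (1 - alpha) * prev_grad)
    prev_grad = grad
    baseline = 0.9 * baseline + 0.1 * loss
```

After the distribution shift, the loss jump exceeds the running baseline, the fast weights restart from the slow weights, and adaptation resumes on the new task.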







Online Adaptation of Learned Vehicle Dynamics Model with Meta-Learning Approach

Tsuchiya, Yuki, Balch, Thomas, Drews, Paul, Rosman, Guy

arXiv.org Artificial Intelligence

We represent a vehicle dynamics model for autonomous driving near the limits of handling via a multi-layer neural network. Online adaptation is desirable in order to handle unseen environments; however, the model must adapt to new environments without forgetting previously encountered ones. In this study, we apply Continual-MAML to overcome this difficulty: it enables the model to adapt to previously encountered environments quickly and efficiently by starting updates from optimized initial parameters. We evaluate the impact of online model adaptation on inference performance and on the control performance of a model predictive path integral (MPPI) controller, using the TRIKart platform. The neural network was pre-trained on driving data collected in our test environment, and online-adaptation experiments were run on multiple road conditions not contained in the training data. Empirical results show that the model using Continual-MAML outperforms both a fixed model and a model updated by plain gradient descent in test-set loss and in online tracking performance with MPPI.
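The "start updates from optimized initial parameters" idea can be sketched as a MAML-style inner loop plus a Reptile-style accumulation of the initialization. This is a minimal illustration under strong assumptions: the dynamics model is reduced to a linear regressor, and the names (`meta_init`, `adapt`, `outer_lr`) are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_mse(w, x, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * x.T @ (x @ w - y) / len(y)

def adapt(meta_init, x, y, inner_lr=0.1, steps=5):
    """Inner loop: a few gradient steps from the meta-learned init."""
    w = meta_init.copy()
    for _ in range(steps):
        w = w - inner_lr * grad_mse(w, x, y)
    return w

meta_init = np.zeros(2)   # slow weights: pre-trained initialization
outer_lr = 0.5

# Online stream of "road conditions": each environment is a different
# linear target; adapt fast weights, then accumulate into the init.
for env in range(20):
    target = rng.normal(size=2)
    x = rng.normal(size=(16, 2))
    y = x @ target
    fast = adapt(meta_init, x, y)   # fast adaptation to this environment
    # Reptile-style outer update: move the init toward the adapted weights.
    meta_init = meta_init + outer_lr * (fast - meta_init)
```

The outer update keeps the initialization close to recently seen environments, so only a few inner gradient steps are needed when a condition recurs.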