la-maml
Review for NeurIPS paper: Look-ahead Meta Learning for Continual Learning
Weaknesses: I found some issues with the experiments, which I list in the following: Line 215 states that the experiments refer to "task incremental settings". This term has a specific meaning in the CL literature [3,4]: it usually means "multi-head", i.e. task labels are given at inference time. I understand that this is the setting featured in section 5.2. Recent literature [1, 2, 5, 6] argues that this setting is trivial and that the Single-head/Class-Incremental setting (i.e. task labels are not given at inference time) is the more meaningful benchmark. Providing Class-IL results could therefore be of great help in understanding how La-MAML performs in this more challenging setting.
Reproducibility Report: La-MAML: Look-ahead Meta Learning for Continual Learning
The Continual Learning (CL) problem involves performing well on a sequence of tasks under limited compute. Current algorithms in the domain are either slow, offline, or sensitive to hyper-parameters. La-MAML, an optimization-based meta-learning algorithm, claims to improve on other replay-based, prior-based and meta-learning based approaches. According to the MER paper [1], the metrics used to measure performance in continual learning are Retained Accuracy (RA) and Backward Transfer-Interference (BTI). La-MAML claims to perform better on these metrics than the state of the art in the domain. This is the main claim of the paper, which we verify in this report.
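As a hedged sketch of how these two metrics are typically computed (the exact definitions follow the MER paper's convention; the accuracy-matrix layout below is an assumption): RA is the mean accuracy over all tasks after training on the final task, and BTI measures how much accuracy on earlier tasks changed between the moment each task was learned and the end of training.

```python
import numpy as np

def retained_accuracy(acc):
    """RA: mean accuracy over all tasks after training on the last task.

    acc[i, j] is the accuracy on task j measured right after training on
    task i (a T x T matrix whose lower triangle fills in as training runs).
    """
    T = acc.shape[0]
    return acc[T - 1].mean()

def backward_transfer(acc):
    """BTI: mean change in accuracy on earlier tasks by the end of training.

    Negative values indicate forgetting (interference); positive values
    indicate beneficial backward transfer.
    """
    T = acc.shape[0]
    return np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)])

# Toy 3-task accuracy matrix (illustrative numbers, not from the paper)
acc = np.array([
    [0.90, 0.00, 0.00],
    [0.80, 0.92, 0.00],
    [0.75, 0.85, 0.95],
])
print(retained_accuracy(acc))  # 0.85
print(backward_transfer(acc))  # -0.11
```

Here the model ends at 85% average accuracy but lost 11 points on average on earlier tasks, which is exactly the forgetting that replay and meta-learning methods try to reduce.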
La-MAML: Look-ahead Meta Learning for Continual Learning
Gupta, Gunshi, Yadav, Karmesh, Paull, Liam
The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory. Our proposed modulation of per-parameter learning rates in our meta-learning update allows us to draw connections to prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks. Source code can be found here: https://github.com/montrealrobotics/La-MAML
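The per-parameter learning-rate modulation described in the abstract can be illustrated with a toy sketch (this is an assumption-laden illustration on quadratic losses, not the paper's implementation): take one inner SGD step on the new-task loss, compute the hypergradient of a meta loss (standing in for the episodic-memory objective) with respect to the per-parameter rates, update the rates, then update the weights with the meta gradient scaled by the clipped, non-negative rates.

```python
import numpy as np

# Hypothetical stand-in objectives (not the paper's benchmarks):
# L_task(w)  = 0.5 * ||w - t||^2  -> grad = w - t
# L_meta(w') = 0.5 * ||w' - m||^2 -> grad = w' - m

def la_maml_step(w, alpha, t, m, eta=0.1):
    """One La-MAML-style update with learnable per-parameter rates (sketch).

    1. Inner SGD step on the new-task loss using per-parameter rates alpha.
    2. Hypergradient of the meta loss w.r.t. alpha, via the chain rule
       through the look-ahead point w'.
    3. Update alpha, then update w with the meta gradient scaled by the
       clipped (non-negative) updated rates, as in meta-descent schemes.
    """
    g_task = w - t                       # gradient of the new-task loss
    w_inner = w - alpha * g_task         # per-parameter inner (look-ahead) step
    g_meta = w_inner - m                 # meta gradient at the look-ahead point
    hypergrad = -g_task * g_meta         # dL_meta/dalpha = g_meta * (-g_task)
    alpha_new = alpha - eta * hypergrad  # learning-rate update
    w_new = w - np.clip(alpha_new, 0.0, None) * g_meta
    return w_new, alpha_new

# Dim 0: task and memory objectives agree; dim 1: they conflict.
w_new, alpha_new = la_maml_step(
    np.zeros(2), np.full(2, 0.1),
    t=np.array([1.0, 1.0]), m=np.array([1.0, -1.0]),
)
print(alpha_new)  # rate grows on the aligned dim, shrinks on the conflicting one
```

The sign behaviour is the point of the sketch: where task and memory gradients align, the hypergradient raises the learning rate; where they conflict, it lowers (and the clip can zero) the rate, which is how this style of update mitigates interference between old and new tasks.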