Continuous Meta-Learning without Tasks
Meta-learning is a promising strategy for learning to efficiently learn using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task-segmented setting, where at training time, offline data is assumed to be split according to the underlying task, and at test time, the algorithms are optimized to learn within a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with unsegmented time series data.
Review for NeurIPS paper: Continuous Meta-Learning without Tasks
Additional Feedback: It would help to better motivate when data of the kind MOCA is designed for actually occurs; at present it feels a little like an algorithm in search of a problem. When the task switches to a new task, can it ever switch back to a previous task, i.e., can the new task be the same as one seen earlier? This happens at test time, but can it happen during training? If it can, can the algorithm detect that it has returned to a previously seen task?
Meta-Review for NeurIPS paper: Continuous Meta-Learning without Tasks
This paper addresses continual meta-learning with unsegmented supervised tasks, which is a challenging and timely topic. All reviewers agree that the proposed method, referred to as MOCA, is a sound solution. The integration of Bayesian changepoint detection with meta-learning is an interesting idea. During the discussion period, one reviewer raised their score by one.
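The meta-review highlights the combination of Bayesian changepoint detection with meta-learning. As a rough, generic sketch (not the paper's exact algorithm; function names, the constant hazard rate, and the interface are illustrative assumptions), the run-length posterior recursion of Bayesian online changepoint detection can be written as:

```python
import numpy as np

def bocpd_step(run_probs, pred_probs, hazard=0.05):
    """One step of a Bayesian online changepoint detection recursion.

    run_probs:  posterior over run lengths 0..t-1 (sums to 1)
    pred_probs: predictive probability of the new observation under the
                model conditioned on each run length
    hazard:     prior probability of a changepoint at any given step
    """
    # Growth: each run length r extends to r+1 if no changepoint occurs.
    growth = run_probs * pred_probs * (1.0 - hazard)
    # Changepoint: probability mass from all run lengths collapses to
    # run length 0 (a fresh task begins).
    cp = np.sum(run_probs * pred_probs) * hazard
    new_probs = np.concatenate(([cp], growth))
    return new_probs / new_probs.sum()
```

In a meta-learning context, `pred_probs` would come from the meta-learned model's posterior predictive under each hypothesized run length, so that task switches are inferred rather than given as labels.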
The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on three nonlinear meta-regression benchmarks as well as two meta-image-classification benchmarks.
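To make the unsegmented setting concrete, here is a minimal, hypothetical construction of such a time series: latent sinusoid regression tasks (a common meta-regression benchmark family) are switched at geometrically distributed times, and the boundaries are never revealed to the learner. All parameter ranges and the hazard rate below are illustrative assumptions, not the paper's.

```python
import numpy as np

def unsegmented_sine_stream(T=500, hazard=0.02, rng=None):
    """Generate an unsegmented regression stream: the latent sinusoid
    task (amplitude, phase) resamples with probability `hazard` at each
    step, and no task boundaries are exposed to the learner."""
    rng = rng if rng is not None else np.random.default_rng(0)
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
    xs, ys = [], []
    for _ in range(T):
        if rng.random() < hazard:  # latent task switch (unobserved)
            amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
        x = rng.uniform(-5.0, 5.0)
        xs.append(x)
        ys.append(amp * np.sin(x + phase))
    return np.array(xs), np.array(ys)
```

A task-segmented meta-learner would require the switch points as supervision; a task-unsegmented method must instead infer them from the (x, y) stream alone.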