Online-Within-Online Meta-Learning
Denevi, Giulia, Stamos, Dimitris, Ciliberto, Carlo, Pontil, Massimiliano
Neural Information Processing Systems
We study the problem of learning a series of tasks in a fully online Meta-Learning setting. The goal is to exploit similarities among the tasks to incrementally adapt an inner online algorithm so as to incur a low average cumulative error over the tasks. We focus on a family of inner algorithms based on a parametrized variant of online Mirror Descent. The inner algorithm is incrementally adapted by an online Mirror Descent meta-algorithm, using the corresponding within-task minimum regularized empirical risk as the meta-loss. In order to keep the process fully online, we approximate the meta-subgradients by means of the online inner algorithm itself.
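A minimal sketch of the online-within-online scheme described above, specialized to the Euclidean instance of online Mirror Descent (i.e. online gradient descent) with the squared loss. All function names, hyperparameters, and the particular meta-subgradient approximation below are illustrative assumptions, not the paper's exact construction: the inner algorithm runs on each task's data stream with its iterates regularized toward a meta-parameter, and the meta-algorithm updates that parameter after each task using a meta-subgradient approximated from the inner online iterates.

```python
import numpy as np

def inner_algorithm(task_data, theta, inner_lr=0.1, reg=1.0):
    # Inner online algorithm: online gradient descent (the Euclidean
    # case of online Mirror Descent) on one task's stream of (x, y)
    # pairs. The regularizer reg * ||w - theta||^2 / 2 biases the
    # iterates toward the meta-parameter theta.
    w = theta.copy()
    cum_loss = 0.0
    meta_subgrad = np.zeros_like(theta)
    for x, y in task_data:
        residual = w @ x - y
        cum_loss += 0.5 * residual ** 2           # squared loss on this point
        grad = residual * x + reg * (w - theta)   # regularized loss gradient
        w = w - inner_lr * grad
        # Approximate the meta-subgradient of the within-task minimum
        # regularized risk from the online iterates, instead of
        # computing an exact inner minimizer (illustrative choice).
        meta_subgrad += reg * (theta - w)
    n = len(task_data)
    return cum_loss / n, meta_subgrad / n

def meta_algorithm(tasks, dim, meta_lr=0.2):
    # Outer online Mirror Descent over the task sequence: after each
    # task, move theta along the approximate meta-subgradient so the
    # inner algorithm starts closer to where future tasks' solutions lie.
    theta = np.zeros(dim)
    avg_errors = []
    for task_data in tasks:
        err, g = inner_algorithm(task_data, theta)
        avg_errors.append(err)
        theta = theta - meta_lr * g
    return theta, avg_errors
```

On a sequence of similar tasks (e.g. linear-regression tasks whose target vectors cluster around a common point), the meta-parameter drifts toward the cluster center, so later tasks incur lower within-task error than earlier ones, which is the intended "low average cumulative error over the tasks" behavior.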