Improving Online Continual Learning Performance and Stability with Temporal Ensembles

Albin Soutif-Cormerais, Antonio Carta, Joost van de Weijer

arXiv.org, Artificial Intelligence

Neural networks are very effective when trained on large datasets for a large number of iterations. However, when they are trained on non-stationary streams of data in an online fashion, their performance degrades (1) because the online setup limits the availability of data, and (2) due to catastrophic forgetting caused by the non-stationary nature of the data. Furthermore, several recent works (Caccia et al., 2022; Lange et al., 2023) showed that replay methods used in continual learning suffer from the stability gap, which is encountered when evaluating the model continually (rather than only at task boundaries). In this article, we study the effect of model ensembling as a way to improve performance and stability in online continual learning. We observe that naively ensembling models from a variety of training tasks considerably increases performance in online continual learning. Starting from this observation, and drawing inspiration from semi-supervised learning ensembling methods, we use a lightweight temporal ensemble that computes the exponential moving average (EMA) of the weights at test time, and show that it can drastically increase performance and stability when used in combination with several methods from the literature.

Neural networks trained with backpropagation have been shown to generalize well even when overparametrized (Krizhevsky et al., 2017). However, these good learning properties mainly hold when the data is provided in an independent and identically distributed manner. When learning on a stream whose distribution varies over time, neural networks are known to suffer from catastrophic forgetting (McCloskey & Cohen, 1989; Goodfellow et al., 2014; Kirkpatrick et al., 2017) and tend to forget knowledge acquired in previous learning tasks. The field of continual learning aims to address this problem. Generally, incremental learning separates learning into distinct tasks (identified by a task-ID) that are encountered sequentially by the agent. A variety of settings have been introduced in continual learning to evaluate several aspects of the continual learning agent; task-incremental learning (De Lange et al., 2021; van de Ven & Tolias, 2018) and class-incremental learning (Masana et al., 2022; Belouadah et al., 2021) are among the most popular. In this paper, we focus on the more challenging class-incremental setting, where the learner does not have access to the task-ID at inference time.
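To make the temporal-ensemble idea concrete, below is a minimal PyTorch sketch of maintaining an EMA copy of the weights during online training and using that copy at test time. This is an illustration under stated assumptions, not the authors' implementation: the `EMAModel` class, the 0.999 decay value, and the training-loop names (`stream`, `criterion`, `evaluate`) are all hypothetical.

```python
import copy

import torch


class EMAModel:
    """Exponential moving average (EMA) of a model's weights.

    A shadow copy of the network accumulates a temporal ensemble of the
    weights seen during training; only this copy is used at test time.
    """

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        self.ema = copy.deepcopy(model)  # shadow copy, never trained directly
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module) -> None:
        # ema <- decay * ema + (1 - decay) * current weights
        for ema_p, p in zip(self.ema.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
        # Copy buffers (e.g. BatchNorm running statistics) directly.
        for ema_b, b in zip(self.ema.buffers(), model.buffers()):
            ema_b.copy_(b)


# Schematic online continual-learning loop (names are illustrative):
#   ema = EMAModel(model)
#   for x, y in stream:
#       optimizer.zero_grad()
#       loss = criterion(model(x), y)
#       loss.backward()
#       optimizer.step()
#       ema.update(model)          # one EMA step per online update
#   accuracy = evaluate(ema.ema)   # test-time predictions from the EMA copy
```

Because the EMA weights change slowly, the evaluated model is shielded from the abrupt weight updates that occur around task boundaries, which is consistent with the stability-gap observation the paper builds on.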
