
Collaborating Authors

Vincenzo Lomonaco


Adaptive Hyperparameter Optimization for Continual Learning Scenarios

Semola, Rudy, Hurtado, Julio, Lomonaco, Vincenzo, Bacciu, Davide

arXiv.org Artificial Intelligence

Hyperparameter selection in continual learning scenarios is a challenging and underexplored aspect, especially in practical non-stationary environments. Traditional approaches, such as grid searches with held-out validation data from all tasks, are unrealistic for building accurate lifelong learning systems. This paper explores the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning hyperparameters according to the complexity of the task at hand. Hence, we propose leveraging the sequential nature of task learning to improve Hyperparameter Optimization efficiency. Using functional analysis of variance (fANOVA)-based techniques, we identify the hyperparameters with the greatest impact on performance. We demonstrate empirically that this approach, agnostic to continual scenarios and strategies, allows us to speed up hyperparameter optimization continually across tasks, and that it remains robust even when the order of sequential tasks varies. We believe that our findings can contribute to the advancement of continual learning methodologies towards more efficient, robust and adaptable models for real-world applications.
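The fANOVA idea referenced above can be sketched with a toy variance decomposition: given grid-search results for one task, the fraction of total performance variance explained by a hyperparameter's marginal (main-effect) averages indicates its importance, so later tasks can focus tuning on the influential hyperparameters. The hyperparameter names and numbers below are illustrative, not taken from the paper.

```python
from statistics import mean, pvariance

# Hypothetical grid-search results: accuracy for each
# (learning_rate, replay_buffer_size) combination on one task.
results = {
    (0.1, 100): 0.62, (0.1, 500): 0.66,
    (0.01, 100): 0.71, (0.01, 500): 0.74,
    (0.001, 100): 0.55, (0.001, 500): 0.57,
}

def main_effect_fraction(results, axis):
    """Fraction of total performance variance explained by the
    marginal (main effect) of one hyperparameter axis (0 or 1)."""
    total_var = pvariance(results.values())
    levels = sorted({k[axis] for k in results})
    # Mean performance at each level of this hyperparameter,
    # averaged over all settings of the other one.
    marginals = [mean(v for k, v in results.items() if k[axis] == lvl)
                 for lvl in levels]
    return pvariance(marginals) / total_var

lr_importance = main_effect_fraction(results, axis=0)   # learning rate
buf_importance = main_effect_fraction(results, axis=1)  # buffer size
# Here the learning rate explains most of the variance, so subsequent
# tasks could restrict the search space to it, reducing tuning cost.
```

Real fANOVA fits a surrogate model (typically a random forest) over the whole configuration space and also quantifies interaction effects; this sketch shows only the main-effect intuition on a discrete grid.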


Avalanche: A PyTorch Library for Deep Continual Learning

Carta, Antonio, Pellegrini, Lorenzo, Cossu, Andrea, Hemati, Hamed, Lomonaco, Vincenzo

arXiv.org Artificial Intelligence

Continual learning is the problem of learning from a nonstationary stream of data, a fundamental issue for sustainable and efficient training of deep neural networks over time. Unfortunately, deep learning libraries only provide primitives for offline training, assuming that the model's architecture and data are fixed. Avalanche is an open source library maintained by the ContinualAI non-profit organization that extends PyTorch by providing first-class support for dynamic architectures, streams of datasets, and incremental training and evaluation methods. Avalanche provides a large set of predefined benchmarks and training algorithms; it is modular and easy to extend, and it supports a wide range of continual learning scenarios. Documentation is available at \url{https://avalanche.continualai.org}.
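To illustrate the experience-stream pattern this abstract describes (a benchmark exposed as a stream of dataset "experiences" consumed by an incremental training strategy), here is a dependency-free sketch. The class names are stand-ins inspired by Avalanche's concepts, not its actual API; consult the documentation linked above for the real interfaces.

```python
# Minimal sketch of the experience-stream pattern: a benchmark is a
# sequence of "experiences" (task datasets arriving over time), and a
# strategy trains on each one incrementally. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Experience:
    task_id: int
    data: list  # (x, y) pairs for this task

@dataclass
class NaiveStrategy:
    """Fine-tunes on each new experience with no forgetting mitigation."""
    seen: list = field(default_factory=list)

    def train(self, exp: Experience) -> None:
        # A real strategy would run gradient steps on exp.data here.
        self.seen.append(exp.task_id)

    def eval(self, stream: list) -> dict:
        # Report, per task, whether it has been trained on so far.
        return {e.task_id: (e.task_id in self.seen) for e in stream}

stream = [Experience(t, data=[]) for t in range(3)]
strategy = NaiveStrategy()
for experience in stream:        # incremental training loop
    strategy.train(experience)
results = strategy.eval(stream)  # evaluate over the full stream
```

The key design point mirrored here is that training code never sees the whole dataset at once: each experience arrives in order, which is what makes offline-training primitives insufficient.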


Architect, Regularize and Replay (ARR): a Flexible Hybrid Approach for Continual Learning

Lomonaco, Vincenzo, Pellegrini, Lorenzo, Graffieti, Gabriele, Maltoni, Davide

arXiv.org Artificial Intelligence

In recent years we have witnessed a renewed interest in machine learning methodologies, especially for deep representation learning, that could overcome basic i.i.d. assumptions and tackle non-stationary environments subject to various distributional shifts or sample selection biases. Within this context, several computational approaches based on architectural priors, regularizers and replay policies have been proposed with different degrees of success depending on the specific scenario in which they were developed and assessed. However, designing comprehensive hybrid solutions that can flexibly and generally be applied with tunable efficiency-effectiveness trade-offs still seems a distant goal. In this paper, we propose "Architect, Regularize and Replay" (ARR), a hybrid generalization of the renowned AR1 algorithm and its variants, that can achieve state-of-the-art results in classic scenarios (e.g. class-incremental learning) but also generalize to arbitrary data streams generated from real-world datasets such as CIFAR-100, CORe50 and ImageNet-1000.


Avalanche: an End-to-End Library for Continual Learning

Lomonaco, Vincenzo, Pellegrini, Lorenzo, Cossu, Andrea, Carta, Antonio, Graffieti, Gabriele, Hayes, Tyler L., De Lange, Matthias, Masana, Marc, Pomponi, Jary, van de Ven, Gido, Mundt, Martin, She, Qi, Cooper, Keiland, Forest, Jeremy, Belouadah, Eden, Calderara, Simone, Parisi, German I., Cuzzolin, Fabio, Tolias, Andreas, Scardapane, Simone, Antiga, Luca, Ahmad, Subutai, Popescu, Adrian, Kanan, Christopher, van de Weijer, Joost, Tuytelaars, Tinne, Bacciu, Davide, Maltoni, Davide

arXiv.org Artificial Intelligence

Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.