Antiga, Luca
NeurIPS 2023 LLM Efficiency Fine-tuning Competition
Saroufim, Mark, Perlitz, Yotam, Choshen, Leshem, Antiga, Luca, Bowyer, Greg, Puhrsch, Christian, Guessous, Driss, Rao, Supriya, Chauhan, Geeta, Kumar, Ashvini, Kumar, Jindal Pawan, Parikh, Rajpoot Ankur, Isaacson, Joe, Yang, Weiwei
Our analysis of the NeurIPS 2023 large language model (LLM) fine-tuning competition revealed two trends: top-performing models exhibit significant overfitting on benchmark datasets, mirroring the broader issue of benchmark overfitting on popular leaderboards, and data curation is essential for obtaining a high-performing LLM. The competition, which consisted of two stages - an open evaluation stage with publicly available tasks and a closed evaluation stage with unseen tasks - allowed us to assess the generalizability of fine-tuned LLMs. Our results highlight the limitations of current benchmark-based evaluation schemes for generative models and demonstrate the need for more robust evaluation methods. Notably, the winning submissions utilized standard open-source libraries and focused primarily on data curation. To facilitate further research and promote reproducibility, we release all competition entries, Docker files, and evaluation infrastructure, providing a valuable resource for the community to explore fine-tuning, overfitting, and reproducibility in LLMs.
Avalanche: an End-to-End Library for Continual Learning
Lomonaco, Vincenzo, Pellegrini, Lorenzo, Cossu, Andrea, Carta, Antonio, Graffieti, Gabriele, Hayes, Tyler L., De Lange, Matthias, Masana, Marc, Pomponi, Jary, van de Ven, Gido, Mundt, Martin, She, Qi, Cooper, Keiland, Forest, Jeremy, Belouadah, Eden, Calderara, Simone, Parisi, German I., Cuzzolin, Fabio, Tolias, Andreas, Scardapane, Simone, Antiga, Luca, Ahmad, Subutai, Popescu, Adrian, Kanan, Christopher, van de Weijer, Joost, Tuytelaars, Tinne, Bacciu, Davide, Maltoni, Davide
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
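As an illustration of the kind of workflow the abstract describes, the following is a minimal usage sketch, not taken from the paper; it assumes the avalanche-lib and torch packages are installed, and import paths (e.g. avalanche.training.supervised) may differ between Avalanche releases.

# Minimal sketch: train a simple model sequentially over a stream of experiences.
# Assumes avalanche-lib is installed; older releases expose Naive under
# avalanche.training.strategies instead of avalanche.training.supervised.
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)              # MNIST split into 5 tasks
model = SimpleMLP(num_classes=benchmark.n_classes)

strategy = Naive(                                    # plain fine-tuning baseline
    model,
    torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    eval_mb_size=32,
)

for experience in benchmark.train_stream:            # experiences arrive one at a time
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)             # evaluate on the full test stream

The same loop works unchanged with other training strategies provided by the library, which is the reproducibility point the abstract emphasizes.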