Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka
–arXiv.org Artificial Intelligence
With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with the decision of which pretrained model to use, and how to finetune it for a new dataset. In this paper, we propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning it. Our method transfers knowledge about the performance of many pretrained models with multiple hyperparameter configurations across a series of datasets. To this end, we evaluated over 20k hyperparameter configurations for finetuning 24 pretrained image classification models on 87 datasets to generate a large-scale meta-dataset. We meta-learn a multi-fidelity performance predictor on the learning curves of this meta-dataset and use it for fast hyperparameter optimization on new datasets. We empirically demonstrate that our resulting approach can quickly select an accurate pretrained model for a new dataset together with its optimal hyperparameters.
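The core idea of the abstract can be illustrated with a minimal sketch: treat the pretrained model choice as one more categorical dimension of the search space, and let a performance predictor rank jointly sampled (model, hyperparameters, fidelity) candidates. All names below (the model list, `predicted_score`, the toy heuristic) are illustrative assumptions, not the paper's actual predictor or search space.

```python
import random

# Hypothetical joint search space: the pretrained model is just another
# categorical hyperparameter alongside the finetuning hyperparameters.
SEARCH_SPACE = {
    "model": ["resnet50", "vit_small", "efficientnet_b0"],  # illustrative names
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "epochs_budget": [1, 4, 16],  # multi-fidelity: cheap partial finetunes first
}

def sample_config(rng):
    """Draw one candidate (model, hyperparameters, fidelity) jointly."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def predicted_score(config):
    """Stand-in for the meta-learned multi-fidelity performance predictor.

    The real predictor would be trained on learning curves from the
    meta-dataset; here we use a toy heuristic so the sketch runs.
    """
    lr_bonus = 0.1 if config["learning_rate"] == 1e-3 else 0.0
    return config["epochs_budget"] * 0.01 + lr_bonus

def quick_tune(n_candidates=50, seed=0):
    """Return the candidate the predictor expects to perform best."""
    rng = random.Random(seed)
    candidates = [sample_config(rng) for _ in range(n_candidates)]
    return max(candidates, key=predicted_score)

best = quick_tune()
print(best)
```

In an actual system the predictor's score would drive an iterative optimization loop (evaluate the chosen config, observe its partial learning curve, update, repeat) rather than a single argmax over random samples; the sketch only shows how model selection and hyperparameter tuning collapse into one search problem.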
Jul-2-2023