Test-Time Efficient Pretrained Model Portfolios for Time Series Forecasting

Mert Kayaalp, Caner Turkmen, Oleksandr Shchur, Pedro Mercado, Abdul Fatir Ansari, Michael Bohlke-Schneider, Bernie Wang

arXiv.org Artificial Intelligence 

Is bigger always better for time series foundation models? With this question in mind, we explore an alternative to training a single, large monolithic model: building a portfolio of smaller, pretrained forecasting models. By applying ensembling or model selection over these portfolios, we achieve competitive performance on large-scale benchmarks using far fewer parameters. We explore strategies for designing such portfolios and find that collections of specialist models consistently outperform portfolios of independently trained generalists. Remarkably, we demonstrate that post-training a base model is a compute-effective approach for creating sufficiently diverse specialists, and we provide evidence that ensembling and model selection are more compute-efficient than test-time fine-tuning.
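The two test-time strategies named in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: ensembling is shown here as a pointwise median over the portfolio's forecasts, and model selection picks the member with the lowest error on a held-out validation window. The function names and the choice of median/MAE are assumptions for the sketch.

```python
def mae(pred, actual):
    """Mean absolute error between a forecast and the ground truth."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def ensemble_forecast(forecasts):
    """Combine a portfolio's forecasts by taking the pointwise median."""
    horizon = len(forecasts[0])
    combined = []
    for t in range(horizon):
        vals = sorted(f[t] for f in forecasts)
        n, mid = len(vals), len(vals) // 2
        combined.append(vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2)
    return combined

def select_model(test_forecasts, val_forecasts, val_actuals):
    """Model selection: return the test forecast of the portfolio member
    with the lowest MAE on a held-out validation window."""
    errors = [mae(vf, val_actuals) for vf in val_forecasts]
    best = min(range(len(errors)), key=errors.__getitem__)
    return test_forecasts[best]
```

Both strategies need only forward passes through the portfolio members, which is the sense in which they can be cheaper at test time than fine-tuning.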
