In-Context Fine-Tuning for Time-Series Foundation Models
Abhimanyu Das, Matthew Faw, Rajat Sen, Yichen Zhou
arXiv.org Artificial Intelligence
Motivated by the recent success of time-series foundation models for zero-shot forecasting, we present a methodology for in-context fine-tuning of a time-series foundation model. In particular, we design a pretrained foundation model that can be prompted (at inference time) with multiple time-series examples, in order to forecast a target time-series into the future. Our foundation model is specifically trained to utilize examples from multiple related time-series in its context window (in addition to the history of the target time-series) to help it adapt to the specific distribution of the target domain at inference time. We show that such a foundation model that uses in-context examples at inference time can obtain much better performance on popular forecasting benchmarks compared to supervised deep learning methods, statistical models, and other time-series foundation models. Interestingly, our in-context fine-tuning approach even rivals the performance of a foundation model that is explicitly fine-tuned on the target domain.
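To make the inference-time setup concrete, the sketch below shows one plausible way the prompt could be assembled: examples from related series are concatenated with the target's own history into a single context sequence before a single model call. This is a minimal illustration under assumed conventions; the `build_context` helper, the separator scheme, and the `model.forecast` call are all hypothetical and are not the authors' actual API or architecture.

```python
import numpy as np

def build_context(related_series, target_history, sep_value=np.nan):
    """Concatenate in-context example series and the target history into one
    context sequence, with a separator between series.

    Hypothetical scheme for illustration only; the paper's actual tokenization
    and separator handling may differ.
    """
    parts = []
    for series in related_series:
        parts.append(np.asarray(series, dtype=float))
        parts.append(np.array([sep_value]))  # marks the boundary between examples
    parts.append(np.asarray(target_history, dtype=float))
    return np.concatenate(parts)

# Usage with synthetic data: three related noisy sine series serve as
# in-context examples, followed by the target's own (shorter) history.
rng = np.random.default_rng(0)
related = [
    np.sin(np.linspace(0, 8, 128)) + 0.1 * rng.standard_normal(128)
    for _ in range(3)
]
target_history = np.sin(np.linspace(0, 4, 64))

context = build_context(related, target_history)
# forecast = model.forecast(context, horizon=32)  # hypothetical foundation-model call
```

The key point the sketch captures is that adaptation happens purely through the context: no gradient updates are performed at inference time, which is what distinguishes this in-context approach from explicit fine-tuning on the target domain.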
Oct-31-2024