Revisiting LLMs as Zero-Shot Time-Series Forecasters: Small Noise Can Break Large Models
Junwoo Park, Hyuck Lee, Dohyun Lee, Daehoon Gwak, Jaegul Choo
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) have shown remarkable performance across diverse tasks without domain-specific training, fueling interest in their potential for time-series forecasting. While prompting alone can elicit zero-shot forecasts from LLMs, recent studies suggest that LLMs lack inherent forecasting ability. Given these conflicting findings, rigorous validation is essential for drawing reliable conclusions. In this paper, we evaluate the effectiveness of LLMs as zero-shot forecasters against state-of-the-art domain-specific models. Our experiments show that LLM-based zero-shot forecasters often struggle to achieve high accuracy due to their sensitivity to noise, underperforming even simple domain-specific models. We explore solutions to reduce LLMs' sensitivity to noise in the zero-shot setting, but improving their robustness remains a significant challenge. Our findings suggest that rather than emphasizing zero-shot forecasting, a more promising direction is to fine-tune LLMs to better process numerical sequences. Our experimental code is available at https://github.com/junwoopark92/revisiting-LLMs-zeroshot-forecaster.
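The paper's central claim, that small input noise can break LLM forecasts, implies a simple evaluation protocol: serialize the history as text for prompting, perturb it slightly, and measure how much the forecast changes. A minimal sketch of that protocol under stated assumptions (the serialization format, function names, and the naive last-value baseline standing in for an LLM are all illustrative, not the authors' code):

```python
import random

def serialize(series, precision=2):
    # Serialize numbers as a comma-separated string, the kind of plain-text
    # representation used when prompting an LLM with a time series.
    return ", ".join(f"{x:.{precision}f}" for x in series)

def naive_forecast(history, horizon):
    # Simple domain-specific-free baseline: repeat the last observed value.
    return [history[-1]] * horizon

def noise_sensitivity(forecaster, history, horizon, sigma=0.1, seed=0):
    # Compare forecasts from clean vs. slightly noise-perturbed input;
    # a robust forecaster should yield a small mean squared difference.
    rng = random.Random(seed)
    noisy = [x + rng.gauss(0.0, sigma) for x in history]
    clean_pred = forecaster(history, horizon)
    noisy_pred = forecaster(noisy, horizon)
    return sum((a - b) ** 2 for a, b in zip(clean_pred, noisy_pred)) / horizon

history = [float(i % 5) for i in range(20)]
score = noise_sensitivity(naive_forecast, history, horizon=4, sigma=0.1)
```

In the paper's setting, `forecaster` would wrap a prompted LLM call on the serialized history; a large sensitivity score relative to domain-specific baselines is what the abstract reports.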
Jun-3-2025