Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Jin, Ming, Wang, Shiyu, Ma, Lintao, Chu, Zhixuan, Zhang, James Y., Shi, Xiaoming, Chen, Pin-Yu, Liang, Yuxuan, Li, Yuan-Fang, Pan, Shirui, Wen, Qingsong
arXiv.org Artificial Intelligence
Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language processing (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for time series forecasting are often specialized, necessitating distinct designs for different tasks and applications. While pre-trained foundation models have made impressive strides in NLP and CV, their development in time series domains has been constrained by data sparsity. Recent studies have revealed that large language models (LLMs) possess robust pattern recognition and reasoning abilities over complex sequences of tokens. However, the challenge remains in effectively aligning the modalities of time series data and natural language to leverage these capabilities. In this work, we present Time-LLM, a reprogramming framework that repurposes LLMs for general time series forecasting while keeping the backbone language models intact. We begin by reprogramming the input time series with text prototypes before feeding it into the frozen LLM to align the two modalities. To augment the LLM's ability to reason with time series data, we propose Prompt-as-Prefix (PaP), which enriches the input context and directs the transformation of reprogrammed input patches. The transformed time series patches from the LLM are finally projected to obtain the forecasts.

Time series forecasting is a critical capability across many real-world dynamic systems (Jin et al., 2023a), with applications ranging from demand planning (Leonard, 2001) and inventory optimization (Li et al., 2022) to energy load forecasting (Liu et al., 2023a) and climate modeling (Schneider & Dickinson, 1974). While pre-trained foundation models, such as large language models (LLMs), have driven rapid progress in computer vision (CV) and natural language processing (NLP), each time series forecasting task typically still requires extensive domain expertise and task-specific model designs. This stands in stark contrast to foundation language models like GPT-3 (Brown et al., 2020), GPT-4 (OpenAI, 2023), Llama (Touvron et al., 2023), inter alia, which can perform well on a diverse range of NLP tasks in a few-shot or even zero-shot setting.
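To make the pipeline in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the three stages it describes: reprogramming input patches via cross-attention over text prototypes, prepending a Prompt-as-Prefix, and projecting the LLM's transformed patches to forecasts. All module names, hyperparameters, and the small stand-in Transformer used in place of a real frozen LLM backbone are illustrative assumptions, not the paper's implementation.

```python
# Minimal, illustrative sketch of the Time-LLM pipeline described above.
# Hypothetical names and hyperparameters throughout; a small stand-in
# Transformer replaces the actual frozen LLM backbone (e.g., Llama).
import torch
import torch.nn as nn

class TimeSeriesReprogramming(nn.Module):
    """Patch a univariate series, then cross-attend patch embeddings to a
    small set of text prototypes so the frozen LLM sees language-like inputs."""
    def __init__(self, patch_len=16, stride=8, d_model=768, n_prototypes=100):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.patch_embed = nn.Linear(patch_len, d_model)
        # In Time-LLM the prototypes are distilled from the LLM's word
        # embeddings; here they are freely learned parameters (assumption).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

    def forward(self, x):                                     # x: (B, seq_len)
        patches = x.unfold(-1, self.patch_len, self.stride)   # (B, n_patches, patch_len)
        q = self.patch_embed(patches)                         # (B, n_patches, d_model)
        kv = self.prototypes.expand(x.size(0), -1, -1)        # (B, n_proto, d_model)
        out, _ = self.cross_attn(q, kv, kv)                   # reprogrammed patches
        return out

class TimeLLMForecaster(nn.Module):
    def __init__(self, seq_len=96, horizon=24, d_model=768,
                 patch_len=16, stride=8, n_prompt_tokens=32):
        super().__init__()
        self.reprogram = TimeSeriesReprogramming(patch_len, stride, d_model)
        # Prompt-as-Prefix: learned stand-in for embedded natural-language
        # context (dataset description, statistics, task instruction).
        self.prompt_prefix = nn.Parameter(torch.randn(n_prompt_tokens, d_model))
        # Stand-in for the frozen LLM body; its weights are never updated.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.llm_body = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.llm_body.parameters():
            p.requires_grad = False
        n_patches = (seq_len - patch_len) // stride + 1
        self.head = nn.Linear(n_patches * d_model, horizon)   # output projection

    def forward(self, x):                                     # x: (B, seq_len)
        tokens = self.reprogram(x)
        prefix = self.prompt_prefix.expand(x.size(0), -1, -1)
        hidden = self.llm_body(torch.cat([prefix, tokens], dim=1))
        patch_states = hidden[:, prefix.size(1):]             # drop prompt positions
        return self.head(patch_states.flatten(1))             # (B, horizon)

model = TimeLLMForecaster()
y_hat = model(torch.randn(4, 96))   # forecasts with shape (4, 24)
```

Because the backbone stays frozen, only the patch embedder, text prototypes, prompt prefix, and output head are trained, which is what makes the reprogramming approach lightweight relative to fully fine-tuning an LLM.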
Jan-29-2024