llmtime
For all baseline methods, we use the MinMaxScaler from sklearn. The likelihood of generating the validation series conditioned on the remaining training series is used to select the hyperparameters. We compare the performance of our GPT-3 predictor against popular time series models. GPT-3 continues to be competitive with or outperforms the baselines on all of the tasks, from in-context learning alone. GPT-3's performance is not due to memorization of the test data: even if our evaluation datasets are present in the GPT-3 training data, it is unlikely that its good performance is the result of memorization, for at least two reasons a priori.
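As a rough sketch of this setup (not the authors' released code), the snippet below scales a 1-D NumPy series with sklearn's MinMaxScaler and selects hyperparameters by the likelihood a candidate baseline assigns to a held-out validation slice of the training series. The names `select_hyperparameters`, `fit_and_score`, `candidate_configs`, and `val_frac` are hypothetical, introduced here only for illustration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def select_hyperparameters(series, candidate_configs, fit_and_score, val_frac=0.2):
    """Pick the candidate config whose model assigns the highest likelihood
    to a held-out validation slice of the training series.

    `fit_and_score` is a hypothetical callable: it fits a baseline on the
    scaled training slice and returns the log-likelihood of the scaled
    validation slice under the fitted model.
    """
    # Hold out the last `val_frac` of the training series for validation.
    n_val = max(1, int(len(series) * val_frac))
    train, val = series[:-n_val], series[-n_val:]

    # Scale to [0, 1]; the scaler is fit on the training slice only.
    scaler = MinMaxScaler()
    train_scaled = scaler.fit_transform(train.reshape(-1, 1)).ravel()
    val_scaled = scaler.transform(val.reshape(-1, 1)).ravel()

    best_config, best_ll = None, -np.inf
    for config in candidate_configs:
        ll = fit_and_score(train_scaled, val_scaled, **config)
        if ll > best_ll:
            best_config, best_ll = config, ll
    return best_config
```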
Large Language Models Are Zero-Shot Time Series Forecasters
Nate Gruver, Marc Finzi, Shikai Qiu, Andrew Gordon Wilson
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks. To facilitate this performance, we propose procedures for effectively tokenizing time series data and converting discrete distributions over tokens into highly flexible densities over continuous values. We argue the success of LLMs for time series stems from their ability to naturally represent multimodal distributions, in conjunction with biases for simplicity and repetition, which align with the salient features in many time series, such as repeated seasonal trends. We also show how LLMs can naturally handle missing data without imputation through non-numerical text, accommodate textual side information, and answer questions to help explain predictions. While we find that increasing model size generally improves performance on time series, we show GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers, and poor uncertainty calibration, which is likely the result of alignment interventions such as RLHF.
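To make the encoding step concrete, the following is a minimal sketch of the kind of digit-level string encoding the abstract describes, assuming the values have already been rescaled to a modest non-negative range. The function names, separators, and precision are illustrative choices, not the paper's exact settings.

```python
def encode_series(values, precision=3, digit_sep=" ", time_sep=" , "):
    """Encode a sequence of non-negative, rescaled values as a digit string.

    Each value is rounded to `precision` decimal places, the decimal point is
    dropped, and digits are separated by `digit_sep` so that a GPT-3-style BPE
    tokenizer sees one token per digit. Timesteps are joined with `time_sep`.
    """
    encoded = []
    for v in values:
        digits = f"{v:.{precision}f}".replace(".", "")
        encoded.append(digit_sep.join(digits))
    return time_sep.join(encoded)

def decode_series(text, precision=3, digit_sep=" ", time_sep=" , "):
    """Invert encode_series: strip separators, reinsert the implied decimal."""
    values = []
    for token in text.split(time_sep):
        digits = token.replace(digit_sep, "")
        values.append(float(digits) / 10**precision)
    return values

# Example round trip: [0.123, 1.5] -> "0 1 2 3 , 1 5 0 0" -> [0.123, 1.5]
print(encode_series([0.123, 1.5]))
print(decode_series(encode_series([0.123, 1.5])))
```

Separating digits with spaces matters for GPT-3-style BPE tokenizers, which otherwise merge runs of digits into irregular multi-digit tokens; the sampled completion can then be decoded back into numbers, and repeated sampling yields a predictive distribution over continuous values.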