Cisco Time Series Model Technical Report
Liang Gou, Archit Khare, Praneet Pabolu, Prachi Patel, Joseph Ross, Hercy Shen, Yuhan Song, Jingze Sun, Kristal Curtis, Vedant Dharnidharka, Abhinav Mathur, Hao Yang
Modern LLMs are capable of learning complex statistical properties of language from a vast corpus of text. Rather than being trained to emulate a particular style or perform a particular task, they learn structure across diverse examples of token sequences, and the learned representations transfer to many downstream tasks and applications. The main idea of a time series foundation model (TSFM) is to apply the same playbook, including the transformer architecture that has revolutionized natural language processing, to sequences of numerical data, i.e., time series. Our present focus is to train a univariate TSFM capable of high-quality zero-shot forecasting, with emphasis on time series arising in certain business domains (initially, observability). Having been exposed to patterns across many time series during training, the TSFM is expected, given a segment of a new (unseen) time series, to predict the subsequent segment without any auxiliary parameter adjustment or fitting. Architectural differences among TSFMs lie chiefly in their approaches to tokenization, transformer configuration, and prediction heads. PatchTST [Nie+23] introduces the time series patch as the analogue of a token, uses a linear transformation of each patch in place of a token embedding, and applies a standard transformer encoder architecture. TimesFM [Das+24] embeds time series patches with a residual block, enabling learning of more complex representations, and applies a decoder-only architecture. Chronos [Ans+24] tokenizes individual data points via scaling and quantization, then applies the (encoder-decoder) T5 architecture [Raf+20], notably formulating forecasting as a classification problem; subsequent versions (Chronos-Bolt, Chronos-2 [Ans+25]) apply patching and "meta features" before the transformer layers, and Chronos-2 uses a T5 encoder.
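To make the tokenization contrast concrete, the sketch below shows a PatchTST-style patch embedding next to a Chronos-style scale-and-quantize tokenizer. It is a minimal PyTorch illustration, not code from the Cisco model or any of the cited systems; the class and function names, patch length, model width, bin count, and quantization range are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """PatchTST-style tokenization: split a series into fixed-length patches
    and map each patch to a d_model-dimensional embedding with a linear layer.
    Patches are non-overlapping here for simplicity (PatchTST also allows a
    stride, producing overlapping patches)."""

    def __init__(self, patch_len: int = 16, d_model: int = 128):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len); seq_len assumed to be a multiple of patch_len.
        b, t = x.shape
        patches = x.reshape(b, t // self.patch_len, self.patch_len)
        return self.proj(patches)  # (batch, num_patches, d_model)


def chronos_style_tokens(x: torch.Tensor, n_bins: int = 4094) -> torch.Tensor:
    """Chronos-style tokenization: mean-scale each series, then quantize the
    scaled values into discrete bins so that forecasting becomes next-token
    classification. Bin count and range are illustrative assumptions."""
    # Scale by the mean absolute value of each series (guarding against zero).
    scale = x.abs().mean(dim=-1, keepdim=True).clamp_min(1e-8)
    scaled = x / scale
    # Uniform bin boundaries over a fixed range, e.g. [-15, 15].
    edges = torch.linspace(-15.0, 15.0, n_bins - 1)
    return torch.bucketize(scaled, edges)  # integer token ids in [0, n_bins-1]


if __name__ == "__main__":
    x = torch.randn(2, 64)                 # batch of two series, length 64
    emb = PatchEmbedding(16, 128)(x)       # -> (2, 4, 128), fed to a transformer
    ids = chronos_style_tokens(x)          # -> (2, 64) discrete token ids
```

In the first scheme the transformer attends over continuous patch embeddings; in the second, forecasting reduces to classification over the bin vocabulary, which is the formulation the Chronos line of models adopts.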
November 26, 2025