This Time is Different: An Observability Perspective on Time Series Foundation Models
Cohen, Ben, Khwaja, Emaad, Doubli, Youssef, Lemaachi, Salahidine, Lettieri, Chris, Masson, Charles, Miccinilli, Hugo, Ramé, Elise, Ren, Qiqi, Rostamizadeh, Afshin, Terrail, Jean Ogier du, Toon, Anna-Monica, Wang, Kan, Xie, Stephan, Xu, Zongzhe, Zhukova, Viktoriya, Asker, David, Talwalkar, Ameet, Abou-Amal, Othmane
We introduce Toto, a time series forecasting foundation model with 151 million parameters. Toto uses a modern decoder-only architecture coupled with architectural innovations designed to account for specific challenges found in multivariate observability time series data. Toto's pre-training corpus is a mixture of observability data, open datasets, and synthetic data, and is 4-10$\times$ larger than those of leading time series foundation models. Additionally, we introduce BOOM, a large-scale benchmark consisting of 350 million observations across 2,807 real-world time series. For both Toto and BOOM, we source observability data exclusively from Datadog's own telemetry and internal observability metrics. Extensive evaluations demonstrate that Toto achieves state-of-the-art performance on both BOOM and on established general-purpose time series forecasting benchmarks. Toto's model weights, inference code, and evaluation scripts, as well as BOOM's data and evaluation code, are all available as open source under the Apache 2.0 License at https://huggingface.co/Datadog/Toto-Open-Base-1.0 and https://github.com/DataDog/toto.
SynTSBench: Rethinking Temporal Pattern Learning in Deep Learning Models for Time Series
Tan, Qitai, Chen, Yiyun, Li, Mo, Gu, Ruiwen, Su, Yilin, Zhang, Xiao-Ping
Recent advances in deep learning have driven rapid progress in time series forecasting, yet many state-of-the-art models continue to struggle with robust performance in real-world applications, even when they achieve strong results on standard benchmark datasets. This persistent gap can be attributed to the black-box nature of deep learning architectures and to the limitations of current evaluation frameworks, which rarely provide clear, quantitative insight into the specific strengths and weaknesses of different models; this complicates the selection of appropriate models for particular forecasting scenarios. To address these issues, we propose a synthetic data-driven evaluation paradigm, SynTSBench, that systematically assesses fundamental modeling capabilities of time series forecasting models through programmable feature configuration. Our framework isolates confounding factors and establishes an interpretable evaluation system with three core analytical dimensions: (1) temporal feature decomposition and capability mapping, which enables systematic evaluation of model capacities to learn specific pattern types; (2) robustness analysis under data irregularities, which quantifies noise tolerance thresholds and anomaly recovery capabilities; and (3) theoretical optimum benchmarking, which establishes performance boundaries for each pattern type, enabling direct comparison between model predictions and mathematical optima. Our experiments show that current deep learning models do not universally approach optimal baselines across all types of temporal features.
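The "programmable feature configuration" idea can be sketched as composing a synthetic series from controllable generators, so that the noise-free component serves as a theoretical optimum target. This is a minimal illustration of the paradigm, not SynTSBench's actual API; all function and parameter names here are illustrative.

```python
import numpy as np

def make_series(n=200, trend=0.05, season_period=24, season_amp=1.0,
                noise_std=0.1, seed=0):
    """Compose a synthetic series from programmable components:
    linear trend + sinusoidal seasonality + Gaussian noise.
    Returns (observed series, noise-free ground-truth signal)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    trend_part = trend * t
    season_part = season_amp * np.sin(2 * np.pi * t / season_period)
    noise_part = rng.normal(0.0, noise_std, size=n)
    return trend_part + season_part + noise_part, trend_part + season_part

series, optimum = make_series()
# Because the generative process is known, the noise-free signal bounds what
# any model can achieve: expected MSE against `series` is at least noise_std**2.
mse_of_optimum = np.mean((series - optimum) ** 2)
```

Varying one generator at a time (e.g. only `season_period`, or only `noise_std`) is what lets such a framework attribute a model's failure to a specific pattern type rather than to the dataset as a whole.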
xTime: Extreme Event Prediction with Hierarchical Knowledge Distillation and Expert Fusion
Li, Quan, Yu, Wenchao, Wang, Suhang, Lin, Minhua, Chen, Lingwei, Cheng, Wei, Chen, Haifeng
Extreme events frequently occur in real-world time series and often carry significant practical implications. In domains such as climate and healthcare, events such as floods, heatwaves, or acute medical episodes can lead to serious consequences, so accurate forecasting of such events is of substantial importance. Most existing time series forecasting models are optimized for overall performance within the prediction window but often struggle to predict extreme events accurately, such as high temperatures or heart rate spikes. The main challenges are data imbalance and the neglect of valuable information contained in the intermediate events that precede extreme ones. In this paper, we propose xTime, a novel framework that combines hierarchical knowledge distillation with expert fusion for extreme event forecasting in time series. In particular, we introduce a mixture of experts (MoE) mechanism that dynamically selects and fuses outputs from expert models across different rarity levels, further improving forecasting performance on extreme events. Experiments on multiple datasets show that xTime achieves consistent improvements, with forecasting accuracy on extreme events improving by 3% to 78%.
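The expert-fusion step described above can be sketched as a softmax gate over rarity-specialized experts. This is a generic MoE fusion sketch under assumed shapes, not xTime's implementation; the expert roles and gate values below are hypothetical.

```python
import numpy as np

def moe_fuse(expert_preds, gate_logits):
    """Fuse forecasts from experts specialized to different rarity levels.
    expert_preds: (n_experts, horizon) per-expert forecasts.
    gate_logits:  (n_experts,) unnormalized gating scores."""
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()                    # softmax gate weights, sum to 1
    return w @ expert_preds         # convex combination across experts

# Three hypothetical experts: common, intermediate, and extreme events.
preds = np.array([[1.0, 1.1],       # common-event expert
                  [2.0, 2.2],       # intermediate-event expert
                  [5.0, 5.5]])      # extreme-event expert
gates = np.array([0.1, 0.2, 3.0])   # gate strongly favors the extreme expert
fused = moe_fuse(preds, gates)      # pulled toward the extreme expert's track
```

In a trained system the gate would itself be a learned network conditioned on the input window, so that the extreme-event expert dominates only when precursor patterns appear.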
VisionTS++: Cross-Modal Time Series Foundation Model with Continual Pre-trained Vision Backbones
Shen, Lefei, Chen, Mouxiang, Liu, Xu, Fu, Han, Ren, Xiaoxue, Sun, Jianling, Li, Zhuo, Liu, Chenghao
Recent studies have indicated that vision models pre-trained on images can serve as time series foundation models (TSFMs) by reformulating time series forecasting (TSF) as image reconstruction. However, effective cross-modal transfer from vision to time series remains challenging due to three discrepancies: (1) the data-modality gap between structured, bounded image data and unbounded, heterogeneous time series; (2) the multivariate-forecasting gap between fixed RGB-three-channel vision models and time series with arbitrary numbers of variates; and (3) the probabilistic-forecasting gap between the deterministic outputs of vision models and the requirement for uncertainty-aware probabilistic predictions. To bridge these gaps, we propose VisionTS++, a TSFM based on continual pre-training of a vision model on large-scale time series. Our approach introduces three key innovations: (1) vision-model-based filtering to identify high-quality sequences, stabilizing pre-training and mitigating the modality gap; (2) colorized multivariate conversion, encoding multivariate series as multi-subfigure RGB images to enhance cross-variate modeling; and (3) multi-quantile forecasting, using parallel reconstruction heads to generate quantile forecasts without parametric assumptions. Experiments show that VisionTS++ achieves state-of-the-art performance in both in-distribution and out-of-distribution forecasting, outperforming specialized TSFMs by 6%-44% in MSE reduction and ranking first on the GIFT-Eval benchmark, which comprises 23 datasets across 7 domains. Our work demonstrates that with appropriate adaptation, vision models can effectively generalize to TSF, thus advancing the pursuit of universal TSFMs. Code is available at https://github.com/HALF111/VisionTSpp.
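Multi-quantile forecasting with parallel heads is typically trained with the pinball (quantile) loss, which penalizes over- and under-prediction asymmetrically per quantile level. The sketch below shows that loss in isolation, assuming one forecast track per quantile head; it is a generic illustration, not the VisionTS++ training code.

```python
import numpy as np

def pinball_loss(y_true, y_pred_q, quantiles):
    """Average pinball loss over parallel quantile heads.
    y_true:   (horizon,) observed values.
    y_pred_q: (n_quantiles, horizon), one forecast track per quantile level."""
    losses = []
    for q, y_pred in zip(quantiles, y_pred_q):
        diff = y_true - y_pred
        # q-weighted penalty for under-prediction, (1-q)-weighted for over-prediction
        losses.append(np.mean(np.maximum(q * diff, (q - 1) * diff)))
    return float(np.mean(losses))

y = np.array([1.0, 2.0, 3.0])
heads = np.array([[0.8, 1.8, 2.8],   # hypothetical 0.1-quantile head
                  [1.0, 2.0, 3.0],   # median head
                  [1.2, 2.2, 3.2]])  # hypothetical 0.9-quantile head
loss = pinball_loss(y, heads, [0.1, 0.5, 0.9])
```

Because each head minimizes its own pinball term, the set of heads approximates the predictive distribution nonparametrically, which is what lets a deterministic reconstruction backbone emit uncertainty-aware forecasts.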
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)
- Government (0.67)
- Banking & Finance (0.67)
- North America > Canada (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Information Technology > Data Science > Data Mining (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Natural Language (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.67)