Deep Explicit Duration Switching Models for Time Series

Neural Information Processing Systems

Time series forecasting plays a key role in informing industrial and business decisions [17, 24, 8], while segmentation is useful for understanding biological and physical systems [40, 45, 34].




1d051fb631f104cb2a621451f37676b9-Paper-Conference.pdf

Neural Information Processing Systems

Recent advances in face forgery techniques produce nearly visually untraceable deepfake videos, which could be leveraged with malicious intentions. As a result, researchers have been devoted to deepfake detection.


Time series forecasting with Hahn Kolmogorov-Arnold networks

Hasan, Md Zahidul, Hamza, A. Ben, Bouguila, Nizar

arXiv.org Machine Learning

Recent Transformer- and MLP-based models have demonstrated strong performance in long-term time series forecasting, yet Transformers remain limited by their quadratic complexity and permutation-equivariant attention, while MLPs exhibit spectral bias. We propose HaKAN, a versatile model based on Kolmogorov-Arnold Networks (KANs), leveraging Hahn polynomial-based learnable activation functions and providing a lightweight and interpretable alternative for multivariate time series forecasting. Our model integrates channel independence, patching, a stack of Hahn-KAN blocks with residual connections, and a bottleneck structure comprised of two fully connected layers. The Hahn-KAN block consists of inter- and intra-patch KAN layers to effectively capture both global and local temporal patterns. Extensive experiments on various forecasting benchmarks demonstrate that our model consistently outperforms recent state-of-the-art methods, with ablation studies validating the effectiveness of its core components.
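The KAN idea behind HaKAN can be pictured with a toy layer in which every input-output edge carries its own learnable univariate activation. The sketch below uses a generic power basis as a stand-in for the paper's Hahn polynomials; `PolyKANLayer` and all names are illustrative, not the authors' code.

```python
import numpy as np

class PolyKANLayer:
    """Toy KAN-style layer: each input-output edge applies a learnable
    univariate polynomial activation, and outputs sum over inputs.
    (A generic power basis stands in for Hahn polynomials here.)"""
    def __init__(self, in_dim, out_dim, degree=3, rng=None):
        rng = rng or np.random.default_rng(0)
        # coeffs[i, j, k]: coefficient of x^k on edge (input i -> output j)
        self.coeffs = rng.normal(scale=0.1, size=(in_dim, out_dim, degree + 1))

    def forward(self, x):
        # x: (batch, in_dim); powers: (batch, in_dim, degree + 1)
        powers = np.stack([x ** k for k in range(self.coeffs.shape[-1])], axis=-1)
        # contract over inputs i and polynomial terms k -> (batch, out_dim)
        return np.einsum("bik,ijk->bj", powers, self.coeffs)

layer = PolyKANLayer(in_dim=4, out_dim=2)
y = layer.forward(np.ones((8, 4)))
print(y.shape)  # (8, 2)
```

In the paper's architecture such layers would be applied within and across patches; here the point is only that the nonlinearity lives on the edges, with learnable coefficients, rather than on the nodes as in an MLP.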


ProtoTS: Learning Hierarchical Prototypes for Explainable Time Series Forecasting

Peng, Ziheng, Ren, Shijie, Gu, Xinyue, Yang, Linxiao, Wang, Xiting, Sun, Liang

arXiv.org Artificial Intelligence

While deep learning has achieved impressive performance in time series forecasting, it becomes increasingly crucial to understand its decision-making process for building trust in high-stakes scenarios. Existing interpretable models often provide only local and partial explanations, lacking the capability to reveal how heterogeneous and interacting input variables jointly shape the overall temporal patterns in the forecast curve. We propose ProtoTS, a novel interpretable forecasting framework that achieves both high accuracy and transparent decision-making through modeling prototypical temporal patterns. ProtoTS computes instance-prototype similarity based on a denoised representation that preserves abundant heterogeneous information. The prototypes are organized hierarchically to capture global temporal patterns with coarse prototypes while capturing finer-grained local variations with detailed prototypes, enabling expert steering and multi-level interpretability. Experiments on multiple realistic benchmarks, including a newly released LOF dataset, show that ProtoTS not only exceeds existing methods in forecast accuracy but also delivers expert-steerable interpretations for better model understanding and decision support. Time series forecasting has been widely applied in high-stakes scenarios such as load forecasting (Jiang et al., 2024; Yang et al., 2023), energy management (Deb et al., 2017; Weron, 2014), and weather prediction (Angryk et al., 2020; Karevan & Suykens, 2020), all of which involve considerable financial impacts. In these applications, while achieving high forecast accuracy is crucial, understanding why and how the model makes specific predictions is equally important. It aids in preventing substantial financial losses and building the necessary trust (Rojat et al., 2021).
A range of explainable time series forecasting methods have been developed to simultaneously ensure interpretability and good predictive performance (Oreshkin et al., 2019; Lim et al., 2021; Zhao et al., 2024; Lin et al., 2024). However, their overall interpretability and potential for further performance improvement are limited, since they mainly provide local, partial explanations on both the output and input sides. C1: On the output side, existing methods (Lim et al., 2021; Zhao et al., 2024) mainly explain the prediction at individual time steps, lacking the ability to help users quickly interpret the reasons behind the overall trend in the forecast curve. For each instance, the model computes its similarity to all prototypes to form a prediction, enabling detailed local interpretation.
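The instance-prototype similarity mechanism can be sketched in a few lines. This is a hypothetical simplification, not ProtoTS itself: the embedding, the prototype bank, and the per-prototype output patterns are all stand-in arrays, and cosine similarity plus a softmax stand in for whatever the paper actually uses.

```python
import numpy as np

def prototype_prediction(z, prototypes, proto_outputs):
    """Hypothetical sketch of prototype-based forecasting: compare the
    instance embedding z to every prototype, then form the forecast as a
    similarity-weighted mixture of per-prototype output patterns."""
    # cosine similarity between the instance and each prototype row
    sims = prototypes @ z / (np.linalg.norm(prototypes, axis=1)
                             * np.linalg.norm(z) + 1e-8)
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over prototypes
    return weights @ proto_outputs               # (horizon,) forecast curve

rng = np.random.default_rng(0)
forecast = prototype_prediction(rng.normal(size=16),          # instance embedding
                                rng.normal(size=(5, 16)),     # 5 prototypes
                                rng.normal(size=(5, 24)))     # 24-step patterns
print(forecast.shape)  # (24,)
```

The interpretability claim follows from this structure: the softmax weights directly name which prototypical patterns drove the forecast, and a hierarchy of coarse and detailed prototypes would yield explanations at multiple levels.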


AutoHFormer: Efficient Hierarchical Autoregressive Transformer for Time Series Prediction

Zhang, Qianru, Wen, Honggang, Li, Ming, Huang, Dong, Yiu, Siu-Ming, Jensen, Christian S., Liò, Pietro

arXiv.org Artificial Intelligence

Abstract: Time series forecasting requires architectures that simultaneously achieve three competing objectives: (1) strict temporal causality for reliable predictions, (2) sub-quadratic complexity for practical scalability, and (3) multi-scale pattern recognition for accurate long-horizon forecasting. We introduce AutoHFormer, a hierarchical autoregressive transformer that addresses these challenges through three key innovations: 1) Hierarchical Temporal Modeling: Our architecture decomposes predictions into segment-level blocks processed in parallel, followed by intra-segment sequential refinement. This dual-scale approach maintains temporal coherence while enabling efficient computation. This design avoids both the anti-causal violations of standard transformers and the sequential bottlenecks of RNN hybrids. It combines fixed oscillating patterns for short-term variations with learnable decay rates for long-term trends. Comprehensive experiments demonstrate that AutoHFormer achieves 10.76x faster training and a 6.06x memory reduction compared to PatchTST on PEMS08, while maintaining consistent accuracy across 96-720 step horizons in most cases. These breakthroughs establish new benchmarks for efficient and precise time series modeling. I. Introduction: Time series forecasting [1, 2, 3, 4, 5] stands as a fundamental pillar of modern predictive analytics, enabling data-driven decision making across numerous mission-critical domains. As demonstrated in recent literature [6, 7], this task has become increasingly vital in our data-rich era.
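The dual-scale autoregression can be illustrated with a toy sketch: predict one coarse anchor per future segment (segments are independent, so this pass could run in parallel), then refine each segment step by step. This is a deliberately simplified stand-in (constant anchors, exponential-smoothing refinement), not the paper's transformer; `hierarchical_forecast` and its details are assumptions for illustration only.

```python
import numpy as np

def hierarchical_forecast(history, horizon, seg_len):
    """Toy sketch of dual-scale autoregression: coarse segment-level
    anchors first, then causal intra-segment sequential refinement."""
    n_seg = horizon // seg_len
    # coarse pass: one anchor per future segment (here: recent mean,
    # so all anchors are computable in parallel from the history alone)
    anchors = np.full(n_seg, history[-seg_len:].mean())
    out = []
    prev = history[-1]
    for a in anchors:
        for _ in range(seg_len):          # fine pass: sequential within a segment
            prev = 0.5 * prev + 0.5 * a   # pull each step toward its anchor
            out.append(prev)
    return np.array(out)

pred = hierarchical_forecast(np.arange(48, dtype=float), horizon=12, seg_len=4)
print(pred.shape)  # (12,)
```

The design point survives the simplification: the expensive long-range computation happens once per segment rather than once per step, while causality is preserved because each refined step conditions only on earlier values.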


OccamVTS: Distilling Vision Models to 1% Parameters for Time Series Forecasting

Lyu, Sisuo, Zhong, Siru, Ruan, Weilin, Liu, Qingxiang, Wen, Qingsong, Xiong, Hui, Liang, Yuxuan

arXiv.org Artificial Intelligence

Time series forecasting is fundamental to diverse applications, with recent approaches leveraging large vision models (LVMs) to capture temporal patterns through visual representations. We reveal that while vision models enhance forecasting performance, 99% of their parameters are unnecessary for time series tasks. Through cross-modal analysis, we find that time series align with low-level textural features but not high-level semantics, which can impair forecasting accuracy. We propose OccamVTS, a knowledge distillation framework that extracts only the essential 1% of predictive information from LVMs into lightweight networks. Using pre-trained LVMs as privileged teachers, OccamVTS employs pyramid-style feature alignment combined with correlation and feature distillation to transfer beneficial patterns while filtering out semantic noise. Counterintuitively, this aggressive parameter reduction improves accuracy by eliminating overfitting to irrelevant visual features while preserving essential temporal patterns. Extensive experiments across multiple benchmark datasets demonstrate that OccamVTS consistently achieves state-of-the-art performance with only 1% of the original parameters, particularly excelling in few-shot and zero-shot scenarios.
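The combination of feature and correlation distillation mentioned in the abstract can be sketched as a two-term loss. This is a generic illustration of those two standard distillation terms, not OccamVTS's actual objective; the function name, the weighting, and the use of batch-wise correlation are all assumptions.

```python
import numpy as np

def distill_loss(student_feats, teacher_feats, alpha=0.5):
    """Hypothetical sketch of feature + correlation distillation:
    match the frozen teacher's features directly, and also match the
    pairwise correlation structure across the batch."""
    # feature distillation: mean squared error between feature maps
    feat = np.mean((student_feats - teacher_feats) ** 2)
    # correlation distillation: compare sample-sample correlation matrices
    cs = np.corrcoef(student_feats)   # (batch, batch), rows as variables
    ct = np.corrcoef(teacher_feats)
    corr = np.mean((cs - ct) ** 2)
    return alpha * feat + (1 - alpha) * corr

rng = np.random.default_rng(0)
loss = distill_loss(rng.normal(size=(8, 32)),   # student features
                    rng.normal(size=(8, 32)))   # teacher features
print(loss >= 0.0)  # True
```

The correlation term is what lets a tiny student ignore the teacher's high-level semantics: it only has to reproduce how samples relate to one another, not the full feature content.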