ProtoTS: Learning Hierarchical Prototypes for Explainable Time Series Forecasting
Peng, Ziheng, Ren, Shijie, Gu, Xinyue, Yang, Linxiao, Wang, Xiting, Sun, Liang
–arXiv.org Artificial Intelligence
While deep learning has achieved impressive performance in time series forecasting, it becomes increasingly crucial to understand its decision-making process for building trust in high-stakes scenarios. Existing interpretable models often provide only local and partial explanations, lacking the capability to reveal how heterogeneous and interacting input variables jointly shape the overall temporal patterns in the forecast curve. We propose ProtoTS, a novel interpretable forecasting framework that achieves both high accuracy and transparent decision-making through modeling prototypical temporal patterns. ProtoTS computes instance-prototype similarity based on a denoised representation that preserves abundant heterogeneous information. The prototypes are organized hierarchically to capture global temporal patterns with coarse prototypes while capturing finer-grained local variations with detailed prototypes, enabling expert steering and multi-level interpretability. Experiments on multiple realistic benchmarks, including a newly released LOF dataset, show that ProtoTS not only exceeds existing methods in forecast accuracy but also delivers expert-steerable interpretations for better model understanding and decision support.

Time series forecasting has been widely applied in high-stakes scenarios such as load forecasting (Jiang et al., 2024; Yang et al., 2023), energy management (Deb et al., 2017; Weron, 2014), and weather prediction (Angryk et al., 2020; Karevan & Suykens, 2020), all of which involve considerable financial impacts. In these applications, while achieving high forecast accuracy is crucial, understanding why and how the model makes specific predictions is equally important: it helps prevent substantial financial losses and builds the necessary trust (Rojat et al., 2021).
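The hierarchical organization described above can be sketched as a two-level routing over prototype embeddings: coarse prototypes account for the global trend, and each coarse prototype owns a set of detailed prototypes that refine local variations. The sketch below is a minimal illustration under assumed conventions (negative squared distance as similarity, softmax normalization, and the function and variable names are hypothetical), not the authors' exact formulation.

```python
import numpy as np

def hierarchical_weights(z, coarse_protos, fine_protos_by_group, tau=1.0):
    """Hypothetical two-level prototype routing.

    z                     : (d,) instance embedding
    coarse_protos         : (K, d) coarse prototype embeddings
    fine_protos_by_group  : list of K arrays, each (M_k, d), the detailed
                            prototypes owned by coarse prototype k
    Returns coarse weights (K,) and a list of fine weight vectors whose
    grand total sums to 1.
    """
    def softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()

    # similarity = negative squared Euclidean distance, temperature-scaled
    coarse_w = softmax(-np.sum((coarse_protos - z) ** 2, axis=1) / tau)

    # fine weights are normalized within each group, then scaled by the
    # group's coarse weight, so the hierarchy composes multiplicatively
    fine_w = [coarse_w[k] * softmax(-np.sum((f - z) ** 2, axis=1) / tau)
              for k, f in enumerate(fine_protos_by_group)]
    return coarse_w, fine_w
```

Because the fine weights are scaled by their group's coarse weight, an expert can steer the forecast at either level: damping one coarse weight suppresses an entire family of local patterns at once.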
A range of explainable time series forecasting methods have been developed to simultaneously ensure interpretability and good predictive performance (Oreshkin et al., 2019; Lim et al., 2021; Zhao et al., 2024; Lin et al., 2024). However, their overall interpretability and potential for further performance improvement are limited, since they mainly provide local, partial explanations on both the output and input sides. C1: On the output side, existing methods (Lim et al., 2021; Zhao et al., 2024) mainly explain the prediction at individual time steps, lacking the ability to help users quickly interpret the reasons behind the overall trend in the forecast curve. In contrast, ProtoTS computes, for each instance, its similarity to all prototypes to form a prediction, enabling detailed local interpretation.
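The similarity-to-prediction step can be sketched as a weighted mixture: softmax-normalized instance-prototype similarities weight each prototype's associated forecast pattern, so the weights themselves serve as the local explanation. This is a minimal sketch under assumed conventions (the similarity measure, the per-prototype output patterns, and all names here are hypothetical, not the paper's exact architecture).

```python
import numpy as np

def prototype_forecast(z, prototypes, proto_patterns, tau=1.0):
    """Hypothetical prototype-based prediction.

    z              : (d,) denoised instance embedding
    prototypes     : (K, d) learned prototype embeddings
    proto_patterns : (K, H) forecast pattern associated with each prototype
    Returns the (H,) forecast and the (K,) similarity weights, which act
    as a per-instance explanation of which patterns drove the forecast.
    """
    # temperature-scaled negative squared distance as similarity
    sims = -np.sum((prototypes - z) ** 2, axis=1) / tau
    w = np.exp(sims - sims.max())
    w /= w.sum()
    # forecast = similarity-weighted mixture of prototype patterns
    return w @ proto_patterns, w
```

Reading off the largest entries of `w` gives the kind of local interpretation described above: the forecast is explained as "mostly prototype k, with a small contribution from prototype j."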
Dec-2-2025