Autoformer
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.05)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Health & Medicine (1.00)
- Government (0.68)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Data Science (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
Extending the forecasting horizon is a critical demand in real applications such as extreme-weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem for time series. Prior Transformer-based models adopt various self-attention mechanisms to discover long-range dependencies. However, the intricate temporal patterns of the long-term future prevent the model from finding reliable dependencies. Transformers also have to adopt sparse versions of point-wise self-attention for efficiency on long series, resulting in an information-utilization bottleneck.
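Autoformer's central building block is a series-decomposition unit that separates a trend from a seasonal residual with a moving average. A minimal numpy sketch of that idea (kernel size 25 is illustrative, not the paper's setting):

```python
import numpy as np

def series_decomp(x: np.ndarray, kernel_size: int = 25):
    """Split a 1-D series into (seasonal, trend) via a moving average,
    in the spirit of Autoformer's series-decomposition block."""
    pad = kernel_size // 2
    # Replicate edge values so the smoothed series keeps the original length.
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    kernel = np.ones(kernel_size) / kernel_size
    trend = np.convolve(padded, kernel, mode="valid")
    seasonal = x - trend
    return seasonal, trend

t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 24)  # linear trend + daily cycle
seasonal, trend = series_decomp(series)
```

The moving average absorbs the slow trend, so the seasonal residual has much lower variance than the raw series; Autoformer applies this split repeatedly inside its encoder and decoder.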
Lightweight and Data-Efficient Multivariate Time Series Forecasting using Residual-Stacked Gaussian Linear (RS-GLinear) Architecture
Following the success of Transformer architectures and their self-attention mechanism in language modelling -- particularly due to their ability to capture long-range dependencies -- many researchers have explored how these architectures can be adapted for time-series forecasting. Variants of Transformer-based models have been proposed to handle both short- and long-term sequence modelling, aiming to predict future time-dependent values from historical observations using varying input window sizes. However, despite the popularity of leveraging Transformer architectures to extract temporal relationships from sets of continuous datapoints, their performance in time-series forecasting has shown mixed results. Several researchers, including Zeng et al. (2022) and Rizvi et al. (2025), have challenged the reliability of emerging Transformer-based solutions for long-term forecasting tasks. In this research, our first objective is to evaluate the Gaussian-based Linear (GLinear) architecture proposed by Rizvi et al. (2025) and to develop an enhanced version of it, referred to in this study as the Residual-Stacked GLinear (RS-GLinear) model. The second objective is to assess the broader applicability of the RS-GLinear model by extending its use to additional domains -- financial time series and epidemiological data -- which were not explored in the baseline model proposed by Rizvi et al. (2025). Most time-series implementations (Transformer-based and linear models) we came across commonly adopt baseline codebases provided by the Hugging Face repository, including the baseline GLinear model used in this study. Therefore, the RS-GLinear model developed in this study is an extended version of the codebase introduced by Rizvi et al. (2025).
Keywords -- Multivariate Time Series Forecasting, Transformer-based models, Weather, Influenza-like Illness, Deep Learning, Transformer-based architecture, Residual-Stacked GLinear, Neural Network.
Time series forecasting has been an important research area in many domains such as finance/economics, retail, healthcare, cloud infrastructure, meteorology, and traffic management (Toner et al. 2024). Since the introduction of the Transformer model (Vaswani et al. 2017), a large amount of research has focused on time-series forecasting with Large Language Models (LLMs), leveraging the sequential dependencies LLMs exploit in text generation (Tan et al. 2024).
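The abstract does not spell out RS-GLinear's internals, but the name suggests linear blocks stacked with residual connections ahead of a forecasting head. A hypothetical numpy forward pass under that assumption (all shapes and the three-block depth are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lookback, horizon, depth = 96, 24, 3  # illustrative sizes, not the paper's

# One linear map per stacked block; weights here are random placeholders.
blocks = [rng.normal(scale=0.01, size=(lookback, lookback)) for _ in range(depth)]
head = rng.normal(scale=0.01, size=(lookback, horizon))

def rs_glinear_forward(x: np.ndarray) -> np.ndarray:
    """Sketch of a residual-stacked linear forecaster: each block refines
    the lookback representation and is added back (residual connection),
    then a linear head projects to the forecast horizon."""
    h = x
    for W in blocks:
        h = h + h @ W  # residual connection around a linear block
    return h @ head    # shape (horizon,)

window = rng.normal(size=lookback)
forecast = rs_glinear_forward(window)
```

Residual connections keep the mapping close to identity at initialization, which is one common reason stacked linear models train stably on small datasets.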
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > Germany (0.04)
- Asia > China (0.04)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Epidemiology (1.00)
Quantum-Optimized Selective State Space Model for Efficient Time Series Prediction
Jura, Stefan-Alexandru, Udrescu, Mihai, Topirceanu, Alexandru
Long-range time series forecasting remains challenging, as it requires capturing non-stationary and multi-scale temporal dependencies while maintaining noise robustness, efficiency, and stability. Transformer-based architectures such as Autoformer and Informer improve generalization but suffer from quadratic complexity and degraded performance on very long time horizons. State space models, notably S-Mamba, provide linear-time updates but often face unstable training dynamics, sensitivity to initialization, and limited robustness for multivariate forecasting. To address such challenges, we propose the Quantum-Optimized Selective State Space Model (Q-SSM), a hybrid quantum-optimized approach that integrates state space dynamics with a variational quantum gate. Instead of relying on expensive attention mechanisms, Q-SSM employs a simple parametrized quantum circuit (RY-RX ansatz) whose expectation values regulate memory updates adaptively. This quantum gating mechanism improves convergence stability, enhances the modeling of long-term dependencies, and provides a lightweight alternative to attention. We empirically validate Q-SSM on three widely used benchmarks, i.e., ETT, Traffic, and Exchange Rate. Results show that Q-SSM consistently improves over strong baselines (LSTM, TCN, Reformer), Transformer-based models, and S-Mamba. These findings demonstrate that variational quantum gating can address current limitations in long-range forecasting, leading to accurate and robust multivariate predictions.
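The gating idea can be illustrated with a single-qubit RY-RX circuit simulated analytically: the expectation value of Z on the rotated state, squashed to [0, 1], acts as a forgetting/update gate on the memory. This is a toy sketch of the mechanism, not the paper's Q-SSM (gate order, parameter count, and the memory update form are assumptions):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rx(phi):
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[c, -1j * s], [-1j * s, c]], dtype=complex)

def quantum_gate(theta: float, phi: float) -> float:
    """Expectation <Z> of an RY-RX circuit applied to |0>, mapped to
    [0, 1] so it can regulate a memory update."""
    state = rx(phi) @ ry(theta) @ np.array([1.0, 0.0], dtype=complex)
    z_expect = float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)
    return 0.5 * (1.0 + z_expect)

# Gated memory update in the spirit of Q-SSM: g near 1 keeps the old memory.
g = quantum_gate(theta=0.3, phi=0.2)
memory, candidate = 1.0, 5.0
memory = g * memory + (1.0 - g) * candidate
```

For this ansatz the gate reduces to (1 + cos(theta)·cos(phi)) / 2, so it is smooth in the variational parameters and can be trained by gradient descent like any other gate.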
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.04)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.04)
- (2 more...)
- Energy (0.46)
- Banking & Finance (0.34)
Frequency-Constrained Learning for Long-Term Forecasting
Kong, Menglin, Zheng, Vincent Zhihao, Sun, Lijun
Many real-world time series are driven by recurring periodic patterns, yet modern deep forecasting models often fail to capture them due to spectral bias and a lack of frequency-aware inductive priors. Motivated by this gap, we propose a simple yet effective method that enhances long-term forecasting by explicitly modeling periodicity through spectral initialization and frequency-constrained optimization. Specifically, we extract dominant low-frequency components via Fast Fourier Transform (FFT)-guided coordinate descent, initialize sinusoidal embeddings with these components, and employ a two-speed learning schedule to preserve meaningful frequency structure during training. Our approach is model-agnostic and integrates seamlessly into existing Transformer-based architectures. Extensive experiments across diverse real-world benchmarks demonstrate consistent performance gains--particularly at long horizons--highlighting the benefits of injecting spectral priors into deep temporal models for robust and interpretable long-range forecasting. Moreover, on synthetic data, our method accurately recovers ground-truth frequencies, further validating its interpretability and effectiveness in capturing latent periodic patterns.
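The extraction-then-initialization pipeline can be sketched in a few lines: pick the strongest spectral peaks with an FFT (a simplified stand-in for the paper's coordinate descent) and seed sin/cos embedding columns at those frequencies. Function names and the embedding layout are illustrative:

```python
import numpy as np

def dominant_frequencies(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the k strongest frequencies (cycles/sample) of the
    mean-removed signal; a simplified stand-in for FFT-guided
    coordinate descent."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x))
    top = np.argsort(spectrum)[::-1][:k]
    return freqs[top]

def init_sinusoidal_embedding(freqs: np.ndarray, length: int) -> np.ndarray:
    """Build a (length, 2k) embedding whose columns are sin/cos at the
    extracted frequencies, preserving the dominant periodic structure."""
    t = np.arange(length)
    cols = [f(2 * np.pi * fr * t) for fr in freqs for f in (np.sin, np.cos)]
    return np.stack(cols, axis=1)

t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.3 * np.sin(2 * np.pi * t / 8)
freqs = dominant_frequencies(x, k=2)
emb = init_sinusoidal_embedding(freqs, length=512)
```

A "two-speed" schedule would then assign these frequency parameters a much smaller learning rate than the rest of the network, so training refines amplitudes and phases without destroying the recovered spectral structure.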
- North America > United States > California (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- (2 more...)
A Review of the Long Horizon Forecasting Problem in Time Series Analysis
Krupakar, Hans, A, Kandappan V
The long horizon forecasting (LHF) problem has appeared in the time series literature for over 35 years. This review covers aspects of LHF over that period and how deep learning has incorporated variants of trend, seasonality, Fourier and wavelet transforms, misspecification-bias reduction, and bandpass filters, while contributing convolutions, residual connections, sparsity reduction, strided convolutions, attention masks, SSMs, normalization methods, low-rank approximations, and gating mechanisms. We highlight time series decomposition techniques, input data preprocessing, and dataset windowing schemes that improve performance. Multi-layer perceptron models, recurrent neural network hybrids, and self-attention models that improve and/or address performance on the LHF problem are described, with an emphasis on feature-space construction. Ablation studies are conducted on the ETTm2 dataset in the multivariate and univariate high useful load (HUFL) forecasting contexts, evaluated over the last 4 months of the dataset. Heatmaps of per-time-step MSE averages over test-set series show a steady increase in error proportional to horizon length, except with the xLSTM and Triformer models, and motivate LHF as an error-propagation problem. The trained models are available here: https://bit.ly/LHFModelZoo
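The dataset windowing scheme the review refers to is the standard sliding-window construction: each training sample pairs a lookback window with the horizon that immediately follows it. A minimal version:

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int, horizon: int, stride: int = 1):
    """Sliding-window sample construction used throughout the LHF
    literature: inputs are lookback windows, targets the following
    horizon values."""
    inputs, targets = [], []
    for start in range(0, len(series) - lookback - horizon + 1, stride):
        inputs.append(series[start : start + lookback])
        targets.append(series[start + lookback : start + lookback + horizon])
    return np.stack(inputs), np.stack(targets)

series = np.arange(100, dtype=float)
X, Y = make_windows(series, lookback=24, horizon=12)
```

The error-propagation view of LHF follows directly from this setup: the farther a target index sits past the lookback window, the less correlated it is with the inputs, so per-step error tends to grow along the horizon.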
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.04)
- Asia > India > Tamil Nadu > Chennai (0.04)
- Oceania > New Zealand (0.04)
- (2 more...)
Transformer-Based Decomposition of Electrodermal Activity for Real-World Mental Health Applications
Tsirmpas, Charalampos, Konstantopoulos, Stasinos, Andrikopoulos, Dimitris, Kyriakouli, Konstantina, Fatouros, Panagiotis
Decomposing Electrodermal Activity (EDA) into phasic (short-term, stimulus-linked responses) and tonic (longer-term baseline) components is essential for extracting meaningful emotional and physiological biomarkers. This study presents a comparative analysis of knowledge-driven, statistical, and deep learning-based methods for EDA signal decomposition, with a focus on in-the-wild data collected from wearable devices. In particular, the authors introduce the Feel Transformer, a novel Transformer-based model adapted from the Autoformer architecture, designed to separate phasic and tonic components without explicit supervision. The model leverages pooling and trend-removal mechanisms to enforce physiologically meaningful decompositions. Comparative experiments against methods such as Ledalab, cvxEDA, and conventional detrending show that the Feel Transformer achieves a balance between feature fidelity (SCR frequency, amplitude, and tonic slope) and robustness to noisy, real-world data. The model demonstrates potential for real-time biosignal analysis and future applications in stress prediction, digital mental health interventions, and physiological forecasting.
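As a point of reference for what the Feel Transformer learns to do, the simplest knowledge-driven baseline mentioned above (conventional detrending) can be written directly: a long moving average approximates the slow tonic level, and the residual keeps the fast phasic responses. Sampling rate and window length below are illustrative assumptions, and this is the baseline, not the Transformer model:

```python
import numpy as np

def decompose_eda(eda: np.ndarray, fs: float = 4.0, window_s: float = 10.0):
    """Baseline tonic/phasic split: a long moving average gives the
    tonic baseline; the residual keeps stimulus-linked phasic responses."""
    k = int(fs * window_s)
    kernel = np.ones(k) / k
    pad = k // 2
    # Edge-replicate so the output keeps the input length.
    padded = np.concatenate([np.full(pad, eda[0]), eda, np.full(k - 1 - pad, eda[-1])])
    tonic = np.convolve(padded, kernel, mode="valid")
    phasic = eda - tonic
    return tonic, phasic

t = np.arange(400)
eda = 2.0 + 0.001 * t            # slow tonic drift
eda[200:210] += 0.5              # a simulated skin-conductance response
tonic, phasic = decompose_eda(eda)
```

Methods such as cvxEDA replace the moving average with a convex optimization over a physiological response model; the Feel Transformer instead learns the split from data via pooling and trend removal.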
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (4 more...)
Synthetic Time Series Forecasting with Transformer Architectures: Extensive Simulation Benchmarks
Forootani, Ali, Khosravi, Mohammad
Time series forecasting plays a critical role in domains such as energy, finance, and healthcare, where accurate predictions inform decision-making under uncertainty. Although Transformer-based models have demonstrated success in sequential modeling, their adoption for time series remains limited by challenges such as noise sensitivity, long-range dependencies, and a lack of inductive bias for temporal structure. In this work, we present a unified and principled framework for benchmarking three prominent Transformer forecasting architectures -- Autoformer, Informer, and PatchTST -- each evaluated through three architectural variants: Minimal, Standard, and Full, representing increasing levels of complexity and modeling capacity. We conduct over 1500 controlled experiments on a suite of ten synthetic signals, spanning five patch lengths and five forecast horizons under both clean and noisy conditions. Our analysis reveals consistent patterns across model families. To advance this landscape further, we introduce the Koopman-enhanced Transformer framework, Deep Koopformer, which integrates operator-theoretic latent state modeling to improve stability and interpretability. We demonstrate its efficacy on nonlinear and chaotic dynamical systems. Our results highlight the Koopman-based Transformer as a promising hybrid approach for robust, interpretable, and theoretically grounded time series forecasting in noisy and complex real-world conditions.
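The benchmark's controlled setup rests on synthetic signals that each isolate one temporal structure, optionally corrupted by Gaussian noise. A miniature version of such a suite (the four signals and parameters here are illustrative; the paper uses ten):

```python
import numpy as np

def synthetic_suite(n: int = 1024, noise: float = 0.0, seed: int = 0) -> dict:
    """Generate named synthetic signals, each isolating one temporal
    structure, with optional additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signals = {
        "sine": np.sin(2 * np.pi * t / 64),
        "trend": 0.01 * t,
        "sawtooth": (t % 64) / 64.0,
        "composite": np.sin(2 * np.pi * t / 64) + 0.01 * t,
    }
    if noise > 0:
        signals = {k: v + rng.normal(scale=noise, size=n) for k, v in signals.items()}
    return signals

clean = synthetic_suite()
noisy = synthetic_suite(noise=0.1)
```

Because every signal has a known ground truth, forecast error can be attributed to a specific structure (periodicity, trend, discontinuity) rather than to unknown properties of a real dataset, which is what makes the clean-vs-noisy comparison across 1500 runs interpretable.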
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Europe > Germany > Saxony > Leipzig (0.04)
- Research Report > New Finding (0.87)
- Research Report > Strength High (0.54)
- Research Report > Experimental Study (0.54)