Reinforced Decoder: Towards Training Recurrent Neural Networks for Time Series Forecasting
Sima, Qi; Zhang, Xinze; Bao, Yukun; Yang, Siyue; Shen, Liang
Abstract--Recurrent neural network-based sequence-to-sequence models have been extensively applied for multi-step-ahead time series forecasting. These models typically involve a decoder trained using either its previous forecasts or the actual observed values as the decoder inputs. However, relying on self-generated predictions can lead to the rapid accumulation of errors over multiple steps, while using the actual observations introduces exposure bias, as these values are unavailable during the extrapolation stage. In this regard, this study proposes a novel training approach called reinforced decoder, which introduces auxiliary models to generate alternative decoder inputs that remain accessible when extrapolating. Additionally, a reinforcement learning algorithm is utilized to dynamically select the optimal inputs to improve accuracy.

Multi-step-ahead time series prediction, which involves extrapolating a sequence of future values based on historical observations, plays a vital role in various real-world applications. Accordingly, research efforts have been devoted to developing statistical and machine learning techniques for multi-step-ahead time series forecasting. ... the extrapolating process, i.e., feeding back the one-step-ahead prediction to the decoder to predict the value at the next step. Some non-autoregressive architectures were proposed to obviate the error propagation issue [10], [16], [17].
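The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of the general idea, assuming a GRU-based encoder-decoder and a REINFORCE-style policy that picks each decoder input from candidates that remain available at extrapolation time. The names (ReinforcedDecoderSketch, aux_forecast, policy) and the specific candidate set are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class ReinforcedDecoderSketch(nn.Module):
    """GRU encoder-decoder whose decoder input at each step is chosen by a
    learned policy from two candidates that are both available at test time:
    (a) the model's own previous forecast, (b) an auxiliary model's forecast.
    (Sketch of the idea in the abstract; not the paper's implementation.)"""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.encoder = nn.GRU(1, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(1, hidden_size)
        self.head = nn.Linear(hidden_size, 1)
        # Policy scores the two candidate inputs given the decoder state.
        self.policy = nn.Linear(hidden_size, 2)

    def forward(self, history, horizon, aux_forecast):
        # history: (batch, T, 1); aux_forecast: (batch, horizon, 1)
        _, h = self.encoder(history)
        h = h.squeeze(0)                    # (batch, hidden_size)
        x = history[:, -1, :]               # last observation seeds step 1
        preds, log_probs = [], []
        for t in range(horizon):
            h = self.decoder(x, h)
            y_hat = self.head(h)            # one-step-ahead forecast
            preds.append(y_hat)
            # Candidate inputs for the next decoding step.
            candidates = torch.stack([y_hat, aux_forecast[:, t, :]], dim=1)
            dist = torch.distributions.Categorical(logits=self.policy(h))
            action = dist.sample()          # 0: own forecast, 1: auxiliary
            log_probs.append(dist.log_prob(action))
            x = candidates[torch.arange(candidates.size(0)), action]
        return torch.cat(preds, dim=1), torch.stack(log_probs, dim=1)

One plausible way to train such a sketch (again, not necessarily the paper's objective) is to combine a standard forecasting loss on the returned predictions with a policy-gradient term such as -(reward.detach() * log_probs).mean(), where the reward could be the negative per-step forecast error.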
arXiv.org Artificial Intelligence
Jun-13-2024
- Country:
- Asia > China (0.46)
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (0.68)
- Technology: