Long-term prediction




Temporal Continual Learning with Prior Compensation for Human Motion Prediction

Neural Information Processing Systems

To better preserve prior information, we introduce the Prior Compensation Factor (PCF). We incorporate it into model training to compensate for the lost prior information. Furthermore, we theoretically derive a more principled optimization objective.


Adaptive Spatio-Temporal Graphs with Self-Supervised Pretraining for Multi-Horizon Weather Forecasting

Liu, Yao

arXiv.org Artificial Intelligence

Accurate and robust weather forecasting remains a fundamental challenge due to the inherent spatio-temporal complexity of atmospheric systems. In this paper, we propose a novel self-supervised learning framework that leverages spatio-temporal structures to improve multi-variable weather prediction. The model integrates a graph neural network (GNN) for spatial reasoning, a self-supervised pretraining scheme for representation learning, and a spatio-temporal adaptation mechanism to enhance generalization across varying forecasting horizons. Extensive experiments on both ERA5 and MERRA-2 reanalysis datasets demonstrate that our approach achieves superior performance compared to traditional numerical weather prediction (NWP) models and recent deep learning methods. Quantitative evaluations and visual analyses in Beijing and Shanghai confirm the model's capability to capture fine-grained meteorological patterns. The proposed framework provides a scalable and label-efficient solution for future data-driven weather forecasting systems.
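The abstract above pairs a GNN for spatial reasoning with a self-supervised pretraining scheme. A minimal sketch of that combination is a masked-reconstruction objective over a station graph: hide some nodes' features, run message passing, and score how well the hidden features are recovered. Everything here (`message_pass`, the mean-neighbor aggregation, the mask fraction) is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def message_pass(H, A):
    """One round of mean-neighbor message passing (simplified GNN layer).
    H: node features (n, d); A: binary adjacency matrix (n, n)."""
    deg = A.sum(axis=1, keepdims=True)
    return np.tanh(A @ H / np.maximum(deg, 1))

def masked_pretrain_loss(H, A, mask_frac=0.25):
    """Self-supervised objective: zero out a subset of stations and
    measure how well message passing reconstructs their features."""
    n = H.shape[0]
    k = max(1, int(round(mask_frac * n)))          # always mask >= 1 node
    idx = rng.choice(n, size=k, replace=False)
    H_in = H.copy()
    H_in[idx] = 0.0                                # hide masked stations
    H_out = message_pass(H_in, A)
    return float(np.mean((H_out[idx] - H[idx]) ** 2))
```

In a real system this loss would drive gradient updates of learnable GNN weights; the sketch only shows the masking/reconstruction structure of the objective.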






GTS_Forecaster: a novel deep learning based geodetic time series forecasting toolbox with python

Liang, Xuechen, He, Xiaoxing, Wang, Shengdao, Montillet, Jean-Philippe, Huang, Zhengkai, Kermarrec, Gaël, Hu, Shunqiang, Zhou, Yu, Huang, Jiahui

arXiv.org Artificial Intelligence

Geodetic time series -- such as Global Navigation Satellite System (GNSS) positions, satellite altimetry-derived sea surface height (SSH), and tide gauge (TG) records -- are essential for monitoring surface deformation and sea level change. Accurate forecasts of these variables can enhance early warning systems and support hazard mitigation for earthquakes, landslides, coastal storm surge, and long-term sea level rise. However, the nonlinear, non-stationary, and incomplete nature of such data presents significant challenges for classic models, which often fail to capture long-term dependencies and complex spatiotemporal dynamics. We introduce GTS Forecaster, an open-source Python package for geodetic time series forecasting. It integrates advanced deep learning models -- including kernel attention networks (KAN), graph neural network-based gated recurrent units (GNNGRU), and time-aware graph neural networks (TimeGNN) -- to effectively model nonlinear spatial-temporal patterns. The package also provides robust preprocessing tools, including outlier detection and a reinforcement learning-based gap-filling algorithm, the Kalman-TransFusion Interpolation Framework (KTIF). GTS Forecaster currently supports forecasting, visualization, and evaluation of GNSS, SSH, and TG datasets, and is adaptable to general time series applications. By combining cutting-edge models with an accessible interface, it facilitates the application of deep learning in geodetic forecasting tasks.


Improving Long-term Autoregressive Spatiotemporal Predictions: A Proof of Concept with Fluid Dynamics

Zhou, Hao, Cheng, Sibo

arXiv.org Artificial Intelligence

Data-driven approaches have emerged as a powerful alternative to traditional numerical methods for forecasting physical systems, offering fast inference and reduced computational costs. However, for complex systems and those without prior knowledge, the accuracy of long-term predictions frequently deteriorates due to error accumulation. Existing solutions often adopt an autoregressive approach that unrolls multiple time steps during each training iteration; although effective for long-term forecasting, this method requires storing entire unrolling sequences in GPU memory, leading to high resource demands. Moreover, optimizing for long-term accuracy in autoregressive frameworks can compromise short-term performance. To address these challenges, we introduce the Stochastic PushForward (SPF) training framework. SPF preserves the one-step-ahead training paradigm while still enabling multi-step-ahead learning. It dynamically constructs a supplementary dataset from the model's predictions and uses this dataset in combination with the original training data. By drawing inputs from both the ground truth and model-generated predictions through a stochastic acquisition strategy, SPF naturally balances short- and long-term predictive performance, reduces overfitting, and improves generalization. Furthermore, training proceeds in a one-step-ahead manner, with multi-step-ahead predictions precomputed between epochs -- eliminating the need to retain entire unrolling sequences in memory and keeping memory usage stable. We demonstrate the effectiveness of SPF on the Burgers' equation and the Shallow Water benchmark, where it delivers superior long-term accuracy compared to autoregressive approaches while reducing memory consumption.
Nomenclature: supplementary dataset update interval; test cases; V, flow speed for Burgers' equation; h, total water depth including the undisturbed water depth; u, v, velocity components in the x (horizontal) and y (vertical) directions; g, gravitational acceleration; r, spatial Euclidean distance; ϵ, Balgovind type of correlation function; L, typical correlation length scale.

1. Introduction. Over many years, scientific research has produced highly detailed mathematical models of physical phenomena [1]. These models are frequently and naturally expressed in the form of differential equations [2], most commonly as time-dependent partial differential equations (PDEs).
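The stochastic acquisition idea described in the abstract can be sketched in a few lines: keep training one-step-ahead, but with some probability draw the input from a supplementary set of model-generated predictions that is precomputed between epochs. The toy linear `step_model`, the per-sample SGD update, and all parameter names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_model(x, w):
    """Toy one-step predictor: a linear map (stands in for a neural net)."""
    return x @ w

def precompute_supplementary(X, w, unroll=2):
    """Between epochs: for each input state, roll the model forward
    (unroll - 1) steps from an earlier true state to get a model-generated
    estimate of that input. No unrolled sequence is kept during training."""
    X_pred = X.copy()
    for t in range(unroll - 1, len(X)):
        x = X[t - (unroll - 1)]
        for _ in range(unroll - 1):
            x = step_model(x, w)
        X_pred[t] = x
    return X_pred

def spf_epoch(X, Y, w, X_pred, p_pred=0.5, lr=1e-2):
    """One epoch of Stochastic PushForward training (sketch).

    X, Y   : ground-truth input/target pairs for one-step-ahead learning
    X_pred : supplementary inputs built from the model's own predictions
    p_pred : probability of drawing the input from the supplementary set
    """
    for i in range(len(X)):
        # Stochastic acquisition: ground truth or model-generated input.
        x = X_pred[i] if rng.random() < p_pred else X[i]
        # One-step-ahead SGD step on squared error (toy update rule).
        err = step_model(x, w) - Y[i]
        w -= lr * np.outer(x, err)
    return w
```

The design point the sketch tries to show: memory stays flat because only one-step pairs ever enter the training loop; the multi-step rollouts live in `precompute_supplementary`, run once between epochs.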


DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering

Neural Information Processing Systems

To mitigate such uncertainty, we consider a conventional, mechanically interpretable framework as the physical prior and extend it to a learning-based version. In brief, we incorporate learnable graph kernels into the classic Discrete Element Analysis (DEA) framework to implement a novel mechanics-integrated learning system.
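The core idea above, replacing a fixed contact law in a discrete-element update with a learnable pairwise kernel, can be sketched as follows. The polynomial kernel, its parameters `theta`, the cutoff radius, and the symplectic Euler step are illustrative assumptions; the paper's actual kernels are learned graph functions, not this toy form.

```python
import numpy as np

def pairwise_forces(pos, theta, cutoff=1.0):
    """DEA-style pairwise forces with a learnable interaction kernel.

    In classic Discrete Element Analysis the contact force is a fixed
    mechanical law; here its magnitude is a small parametric function of
    inter-particle distance, standing in for a learnable graph kernel.
    pos   : particle positions (n, 2)
    theta : kernel coefficients (assumed polynomial form, for illustration)
    """
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            if r < cutoff:
                # Kernel magnitude: polynomial in r with learnable coeffs.
                mag = theta[0] + theta[1] * r + theta[2] * r * r
                F[i] += mag * d / (r + 1e-9)
    return F

def dea_step(pos, vel, theta, dt=0.01):
    """Symplectic Euler integration step with unit masses."""
    F = pairwise_forces(pos, theta)
    vel = vel + dt * F
    return pos + dt * vel, vel
```

Because the kernel depends only on distance and acts along the pair direction, every pair exerts equal and opposite forces, so total momentum is conserved regardless of the learned parameters, which is the kind of mechanical structure such priors are meant to preserve.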