GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks
Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang
arXiv.org Artificial Intelligence
In recent years, there has been a rapid development of spatio-temporal prediction techniques in response to the increasing demands of traffic management and travel planning. While advanced end-to-end models have achieved notable success in improving predictive performance, their integration and expansion pose significant challenges. This work aims to address these challenges by introducing a spatio-temporal pre-training framework that seamlessly integrates with downstream baselines and enhances their performance. The framework is built upon two key designs: (i) We propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies. The model incorporates customized parameter learners and hierarchical spatial pattern encoding networks. These modules are specifically designed to capture spatio-temporal customized representations and intra- and inter-cluster region semantic relationships, which have often been neglected in existing approaches. (ii) We introduce an adaptive mask strategy as part of the pre-training mechanism. This strategy guides the mask autoencoder in learning robust spatio-temporal representations and facilitates the modeling of different relationships, ranging from intra-cluster to inter-cluster, in an easy-to-hard training manner. Extensive experiments conducted on representative benchmarks demonstrate the effectiveness of our proposed method. We have made our model implementation publicly available at https://github.com/HKUDS/GPT-ST.
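The adaptive mask strategy described above can be illustrated with a minimal numpy sketch. Everything here is an illustrative assumption rather than the authors' implementation: the function name, the simple two-phase schedule (entry-level masking early, cluster-level masking later), the linear mask-ratio ramp, and the toy region clustering are all hypothetical.

```python
import numpy as np

def adaptive_mask(x, epoch, total_epochs, clusters, rng,
                  base_ratio=0.25, max_ratio=0.75):
    """Easy-to-hard masking sketch (hypothetical, not the paper's code).

    Early epochs hide random (region, time) entries, which can often be
    recovered from nearby intra-cluster signals; later epochs hide whole
    region clusters at sampled time steps, forcing the autoencoder to
    rely on inter-cluster dependencies.

    x: array of shape (num_regions, num_steps), e.g. traffic readings.
    clusters: list of index arrays, one per region cluster (assumed given).
    """
    # Linearly ramp the mask ratio from easy to hard over training.
    ratio = base_ratio + (max_ratio - base_ratio) * epoch / max(total_epochs - 1, 1)
    mask = np.zeros_like(x, dtype=bool)
    if epoch < total_epochs // 2:
        # Easy phase: independent entry-level masking.
        mask = rng.random(x.shape) < ratio
    else:
        # Hard phase: mask one whole cluster at each sampled time step.
        n_steps = x.shape[1]
        picked = rng.choice(n_steps, size=max(1, int(ratio * n_steps)),
                            replace=False)
        for t in picked:
            cid = rng.choice(len(clusters))
            mask[clusters[cid], t] = True
    # Zero out masked entries; the pre-training target is to reconstruct them.
    return np.where(mask, 0.0, x), mask

rng = np.random.default_rng(0)
x = rng.random((8, 12))                      # 8 regions, 12 time steps
clusters = [np.arange(0, 4), np.arange(4, 8)]  # toy cluster assignment
x_easy, m_easy = adaptive_mask(x, epoch=0, total_epochs=10,
                               clusters=clusters, rng=rng)
x_hard, m_hard = adaptive_mask(x, epoch=9, total_epochs=10,
                               clusters=clusters, rng=rng)
```

In this sketch the hard phase masks contiguous cluster blocks rather than scattered entries, which is one plausible way to realize the paper's intra-cluster-to-inter-cluster curriculum; the actual schedule and clustering in GPT-ST are learned and more involved.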
Nov-6-2023