Temporal Generalization Estimation in Evolving Graphs
Bin Lu, Tingyan Ma, Xiaoying Gan, Xinbing Wang, Yunqiang Zhu, Chenghu Zhou, Shiyu Liang
–arXiv.org Artificial Intelligence
Graph Neural Networks (GNNs) are widely deployed across many fields, but they often struggle to maintain accurate representations as graphs evolve. We theoretically establish a lower bound, proving that under mild conditions, representation distortion inevitably occurs over time. To estimate the temporal distortion without human annotation after deployment, a naive approach is to pre-train a recurrent model (e.g., an RNN) before deployment and use it afterwards, but the resulting estimation is far from satisfactory. In this paper, we analyze the representation distortion from an information-theoretic perspective and attribute it primarily to inaccurate feature extraction during evolution. Ablation studies underscore the necessity of graph reconstruction: for example, on the OGB-arXiv dataset, the estimation metric MAPE deteriorates from 2.19% to 8.00% without reconstruction.

The rapid rise of Graph Neural Networks (GNNs) has led to their wide deployment in various applications. However, recent studies have uncovered a notable challenge: as the distribution of the graph shifts continuously after deployment, GNNs may suffer from representation distortion over time, which in turn leads to continuing performance degradation (Liang et al., 2018; Wu et al., 2022; Lu et al., 2023), as shown in Figure 1. This distribution shift may come from the continuous addition of nodes and edges, changes in network structure, or the introduction of new features. The issue becomes particularly salient in applications where the graph evolves rapidly over time.
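For reference, the estimation metric quoted above, MAPE (Mean Absolute Percentage Error), measures how closely the estimated performance tracks the true performance across snapshots. A minimal sketch of the metric is below; the example values are hypothetical, not figures from the paper:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent.

    Averages |true - predicted| / |true| over all snapshots,
    then scales to a percentage.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical per-snapshot accuracy of a deployed GNN (true)
# versus the values an estimator predicted for those snapshots.
actual = [0.72, 0.69, 0.66, 0.63]
estimated = [0.71, 0.70, 0.64, 0.65]
print(round(mape(actual, estimated), 2))
```

A lower MAPE means the estimator's predicted performance curve stays closer to the ground truth, which is why the jump from 2.19% to 8.00% without graph reconstruction indicates a substantial loss of estimation quality.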
Apr-7-2024
- Country:
- Asia (0.67)
- North America > United States
- California (0.14)
- Hawaii (0.14)
- Louisiana (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Information Technology (0.68)