Robust Training of Temporal GNNs using Nearest Neighbours based Hard Negatives
Shubham Gupta and Srikanta Bedathur
arXiv.org Artificial Intelligence
Temporal graph neural networks (TGNNs) have exhibited state-of-the-art performance on future-link prediction tasks. Training of these TGNNs is typically driven by an unsupervised loss based on uniform random negative sampling. During training, for a given positive example, the loss is computed over largely uninformative negatives, which introduces redundancy and leads to sub-optimal performance. In this paper, we propose a modified unsupervised learning objective for TGNNs that replaces uniform negative sampling with importance-based negative sampling. We theoretically motivate and define a dynamically computed distribution for sampling negative examples. Finally, through empirical evaluation on three real-world datasets, we show that TGNNs trained with a loss based on the proposed negative sampling consistently achieve superior performance.
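To make the core idea concrete, here is a minimal sketch of nearest-neighbour-based hard negative sampling, not the authors' exact method: given current node embeddings from a TGNN encoder, negatives are drawn from a distribution that upweights nodes most similar to the positive destination. The function name `sample_hard_negatives` and the `temperature` and `num_neg` parameters are hypothetical, introduced only for illustration.

```python
import torch

def sample_hard_negatives(emb, pos_dst, num_neg=1, temperature=1.0):
    """Importance-based negative sampling (illustrative sketch).

    Nodes whose embeddings are nearest to the positive destination are
    sampled with higher probability, yielding "hard" negatives instead
    of uniform random ones.

    emb:      (N, d) current node embeddings from the TGNN encoder
    pos_dst:  (B,)   indices of positive destination nodes
    Returns:  (B, num_neg) indices of sampled negative nodes
    """
    # Similarity of each positive destination to every candidate node.
    sim = emb[pos_dst] @ emb.T                       # (B, N)
    # Exclude each positive node itself from its candidate pool.
    sim.scatter_(1, pos_dst.unsqueeze(1), float("-inf"))
    # A softmax over similarities defines the importance distribution;
    # higher similarity -> more likely to be drawn as a hard negative.
    probs = torch.softmax(sim / temperature, dim=1)  # (B, N)
    return torch.multinomial(probs, num_neg)         # (B, num_neg)

# Toy usage: 1,000 nodes with 64-dimensional embeddings, batch of 32.
emb = torch.randn(1000, 64)
pos_dst = torch.randint(0, 1000, (32,))
negs = sample_hard_negatives(emb, pos_dst, num_neg=5)
```

Because the embeddings evolve during training, the sampling distribution is recomputed dynamically, which is what distinguishes this scheme from a fixed uniform sampler.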
Feb-14-2024