Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery
Pengwei Yan, Kaisong Song, Zhuoren Jiang, Yangyang Kang, Tianqianjin Lin, Changlong Sun, Xiaozhong Liu
arXiv.org Artificial Intelligence
While self-supervised graph pretraining techniques have shown promising results in various domains, their application still faces three challenges: limited topology learning, dependence on human-curated knowledge, and inadequate modeling of multi-level interactions. To address these issues, we propose a novel solution, Dual-level Graph self-supervised Pretraining with Motif discovery (DGPM), which introduces a dual-level pretraining structure that orchestrates node-level and subgraph-level pretext tasks. Unlike prior approaches, DGPM autonomously uncovers significant graph motifs through an edge pooling module and aligns the learned motif similarities with graph kernel-based similarities. A cross-matching task enables fine-grained node-motif interactions and novel representation learning. Extensive experiments on 15 datasets validate DGPM's effectiveness and generalizability: it outperforms state-of-the-art methods in both unsupervised representation learning and transfer learning settings, and the autonomously discovered motifs suggest gains in robustness and interpretability.
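The abstract names three moving parts: an edge pooling module that contracts edges into candidate motifs, a motif-level task that aligns learned motif similarities with graph kernel similarities, and a cross-matching task tying nodes to motifs. The following is a minimal PyTorch-style sketch of how those pieces could fit together; it is not the authors' implementation. The names (EdgePool, motif_align_loss, cross_match_loss), the node-to-motif membership labels, and the identity matrix standing in for kernel similarities are all illustrative assumptions.

```python
# Hedged sketch of DGPM-style dual-level objectives, based only on the
# abstract. All component names, shapes, and targets are assumptions.
import torch
import torch.nn.functional as F
from torch import nn

class EdgePool(nn.Module):
    """Toy edge-pooling step: score each edge, keep the top-scoring ones,
    and contract each kept edge into a candidate motif embedding."""
    def __init__(self, dim, n_motifs=2):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
        self.n_motifs = n_motifs

    def forward(self, h, edge_index):
        src, dst = edge_index                                         # each (E,)
        s = self.score(torch.cat([h[src], h[dst]], -1)).squeeze(-1)   # (E,)
        top = s.topk(min(self.n_motifs, s.numel())).indices
        gate = torch.sigmoid(s[top]).unsqueeze(-1)    # keeps the scorer trainable
        return gate * (h[src[top]] + h[dst[top]]) / 2 # (n_motifs, dim)

def motif_align_loss(motif_h, kernel_sim):
    """Match pairwise cosine similarities of learned motif embeddings to a
    precomputed graph-kernel similarity matrix (assumed given)."""
    z = F.normalize(motif_h, dim=-1)
    return F.mse_loss(z @ z.t(), kernel_sim)

def cross_match_loss(node_h, motif_h, membership):
    """Node-motif cross-matching as classification: each node should score
    its own motif (hypothetical `membership` labels) above the others."""
    logits = F.normalize(node_h, dim=-1) @ F.normalize(motif_h, dim=-1).t()
    return F.cross_entropy(logits, membership)

# Usage on a 4-node toy cycle with random features.
h = torch.randn(4, 16, requires_grad=True)              # node embeddings
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
motifs = EdgePool(16)(h, edge_index)                    # (2, 16)
loss = (motif_align_loss(motifs, torch.eye(2))          # eye(2): stand-in kernel
        + cross_match_loss(h, motifs, torch.tensor([0, 0, 1, 1])))
loss.backward()
```

In the paper's setting, the node embeddings would come from a GNN encoder and the similarity targets from graph kernels computed over the discovered motifs; here both are stand-ins so the sketch stays self-contained.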
Dec-19-2023