A Foundation Graph Model
Davies, Alex O., Green, Riku W., Ajmeri, Nirav S., Filho, Telmo M. Silva
–arXiv.org Artificial Intelligence
The principal benefit of unsupervised graph representation learning is that a pre-trained model can be fine-tuned where data or labels are scarce. Existing approaches are domain specific, maintaining consistent node and edge attributes across the pre-training and target datasets; this precludes transfer to other domains. A model capable of positive transfer on arbitrary tasks and domains would represent the first foundation graph model. In this work we present FoToM, a graph pre-training method based on node and edge feature exclusion, trained with adversarial contrastive learning. We use FoToM to pre-train models over multiple graph domains, producing the first foundation graph models. We demonstrate positive transfer on evaluation datasets from multiple domains, including domains not present in the pre-training data. On all datasets performance is at worst on-par with, and on 76% of tasks significantly better than, a supervised baseline ($P \leq 0.01$), with an 8 to 40% reduction in error at 95% confidence. Contrary to other research, pre-training on a dataset with the target domain excluded leads to better performance than pre-training on a dataset drawn only from the target domain. The multi-domain model at worst matches, and on 56% of tasks significantly outperforms, single-domain models ($P \leq 0.01$). These results hold when node labels are used in evaluation, where performance is consistently superior to single-domain or non-pre-trained models. Notably, FoToM is beneficial in both large and scarce data regimes for the target domains.
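The paper's actual architecture and augmentation scheme are not detailed in this abstract, but the core idea — contrasting a graph against a view with node features excluded — can be sketched in plain numpy. Everything below (the one-layer mean-aggregation encoder, the constant-feature exclusion view, and the InfoNCE-style loss) is an illustrative assumption, not FoToM's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj, x, w):
    # Toy one-layer GNN encoder: mean-aggregate each node's neighbours
    # (plus a self-loop), project, and L2-normalise the embeddings.
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    h = (adj @ x + x) / deg
    z = np.tanh(h @ w)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def exclude_node_features(x):
    # Feature-exclusion view: node attributes replaced by constants,
    # so only graph structure carries information.
    return np.ones_like(x)

def info_nce(z1, z2, tau=0.5):
    # Contrastive objective: the same node across the two views is the
    # positive pair; all other cross-view nodes act as negatives.
    sim = (z1 @ z2.T) / tau
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Random undirected graph with 8 nodes and 5-dimensional features.
n, d, h_dim = 8, 5, 4
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0.0)
x = rng.normal(size=(n, d))
w = rng.normal(size=(d, h_dim))

loss = info_nce(encode(adj, x, w), encode(adj, exclude_node_features(x), w))
print(loss)
```

Minimising such a loss pushes the encoder to produce embeddings that are stable whether or not node features are present, which is one plausible route to the cross-domain transfer the abstract reports; the paper's adversarial component is not modelled here.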
Jan-19-2024