Pre-training Graph Neural Networks with Structural Fingerprints for Materials Discovery
Shuyi Jia, Shitij Govil, Manav Ramprasad, Victor Fung
–arXiv.org Artificial Intelligence
In recent years, pre-trained graph neural networks (GNNs) have been developed as general models which can be effectively fine-tuned for various potential downstream tasks in materials science, and have shown significant improvements in accuracy and data efficiency. The most widely used pre-training methods currently involve either supervised training to fit a general force field or self-supervised training by denoising equilibrium atomic structures. Both methods require datasets generated from quantum mechanical calculations, which quickly becomes intractable when scaling to larger datasets. Here we propose a novel pre-training objective which instead uses cheaply-computed structural fingerprints as targets while maintaining comparable performance across a range of different structural descriptors. Our experiments show this approach can act as a general strategy for pre-training GNNs with application towards large scale foundational models for atomistic data.
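To make the proposed objective concrete, the sketch below illustrates the general idea in a heavily simplified form: a cheap structural fingerprint is computed directly from atomic positions and used as the regression target for pre-training, in place of expensive quantum mechanical labels. This is not the authors' implementation; the Gaussian-smeared radial histogram stands in for real descriptors (e.g. SOAP or ACSFs), and a linear least-squares model stands in for a GNN readout.

```python
import numpy as np

def radial_fingerprint(positions, r_max=5.0, n_bins=16):
    """Cheap structural fingerprint: a Gaussian-smeared histogram of
    pairwise interatomic distances (a simplified stand-in for
    descriptors such as SOAP or atom-centered symmetry functions)."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(n, k=1)          # unique pairs only
    d = dists[iu]
    centers = np.linspace(0.0, r_max, n_bins)
    width = r_max / n_bins
    # smear each distance onto the radial grid
    fp = np.exp(-((d[:, None] - centers[None, :]) ** 2)
                / (2 * width ** 2)).sum(axis=0)
    return fp / max(len(d), 1)

# Pre-training setup: the model regresses fingerprints instead of
# DFT energies/forces, so no quantum mechanical data is needed.
rng = np.random.default_rng(0)
structures = [rng.uniform(0.0, 4.0, size=(8, 3)) for _ in range(32)]
targets = np.stack([radial_fingerprint(p) for p in structures])

# Toy per-structure features (mean/std of coordinates); a GNN would
# produce learned embeddings here instead.
feats = np.stack([np.concatenate([p.mean(0), p.std(0)]) for p in structures])

# Fit the surrogate "readout" by least squares and measure the
# fingerprint-regression (pre-training) loss.
W, *_ = np.linalg.lstsq(feats, targets, rcond=None)
mse = np.mean((feats @ W - targets) ** 2)
```

In the actual method, the fitted model's learned representations would then be fine-tuned on downstream property-prediction tasks; the key point is that the pre-training targets cost only a fingerprint evaluation per structure.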
Mar-3-2025