Evidential Uncertainty Probes for Graph Neural Networks
Linlin Yu, Kangshuo Li, Pritom Kumar Saha, Yifei Lou, Feng Chen
– arXiv.org Artificial Intelligence
Accurate quantification of both aleatoric and epistemic uncertainties is essential when deploying Graph Neural Networks (GNNs) in high-stakes applications such as drug discovery and financial fraud detection, where reliable predictions are critical. Although Evidential Deep Learning (EDL) efficiently quantifies uncertainty using a Dirichlet distribution over predictive probabilities, existing EDL-based GNN (EGNN) models require modifications to the network architecture and retraining, and thus cannot take advantage of pre-trained models. We propose a plug-and-play framework for uncertainty quantification in GNNs that works with pre-trained models without retraining. Our Evidential Probing Network (EPN) uses a lightweight Multi-Layer Perceptron (MLP) head to extract evidence from learned representations, allowing efficient integration with various GNN architectures. We further introduce evidence-based regularization techniques, referred to as EPN-reg, to enhance the estimation of epistemic uncertainty with theoretical justifications. Extensive experiments demonstrate that the proposed EPN-reg achieves state-of-the-art performance in accurate and efficient uncertainty quantification, making it suitable for real-world deployment.
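To make the probing idea concrete, here is a minimal sketch of an evidential MLP head applied to frozen GNN embeddings, assuming a PyTorch setting. The names (`EvidentialProbe`, `dirichlet_uncertainties`, `gnn_encoder`) and the specific uncertainty measures (vacuity as an epistemic signal, entropy of the expected probabilities as an aleatoric proxy) are standard EDL ingredients used for illustration, not the authors' EPN-reg implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialProbe(nn.Module):
    """Lightweight MLP head mapping frozen GNN node embeddings to
    non-negative class-wise evidence, parameterizing a Dirichlet distribution."""

    def __init__(self, embed_dim: int, num_classes: int, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Softplus keeps evidence non-negative; Dirichlet parameters alpha = evidence + 1.
        evidence = F.softplus(self.mlp(h))
        return evidence + 1.0


@torch.no_grad()
def dirichlet_uncertainties(alpha: torch.Tensor):
    """Common evidential uncertainty measures derived from Dirichlet parameters."""
    strength = alpha.sum(dim=-1, keepdim=True)        # Dirichlet strength S = sum_k alpha_k
    probs = alpha / strength                          # expected class probabilities
    vacuity = alpha.size(-1) / strength.squeeze(-1)   # K / S: high when total evidence is low (epistemic)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # entropy of mean prediction (aleatoric proxy)
    return probs, vacuity, entropy


# Illustrative usage with a hypothetical pre-trained encoder returning
# node embeddings of shape [num_nodes, embed_dim]:
#
#     with torch.no_grad():
#         h = gnn_encoder(x, edge_index)          # encoder stays frozen
#     probe = EvidentialProbe(embed_dim=h.size(-1), num_classes=7)
#     alpha = probe(h)
#     probs, vacuity, entropy = dirichlet_uncertainties(alpha)
```

Because only the small probe head is trained, the approach can be attached to different pre-trained GNN architectures at low cost, which is the efficiency argument the abstract makes.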
Mar-11-2025
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Education (0.67)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.34)