Ignoring Directionality Leads to Compromised Graph Neural Network Explanations
Changsheng Sun, Xinke Li, Jin Song Dong
–arXiv.org Artificial Intelligence
Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling relational data in applications such as financial fraud detection [1], [2] and social network analysis [3]. As GNNs are increasingly deployed in safety-critical domains where their decisions impact human lives and societal well-being [4], [5], ensuring their trustworthiness has become essential. Unlike traditional software systems, where correctness can often be established through formal verification [6], [7], deep learning models, including GNNs, function as black boxes, making it difficult to validate their decisions. To address this, explainability has become a prerequisite for deploying GNNs in real-world decision-making pipelines. Post-hoc explanation methods such as GNNExplainer [8] and PGExplainer [9] are now widely used on these black-box GNN models to enhance user trust, facilitate model debugging for developers, and provide external validation for regulatory compliance. A useful analogy can be drawn between explaining convolutional neural networks (CNNs) and explaining GNNs. As shown in Figure 1, CNN explainability methods such as Grad-CAM [10] highlight the image regions that most influence a prediction, e.g., focusing on a dog's face to classify the image as "dog." Similarly, GNN explainers identify the critical subgraph structures that drive a model's predictions.
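For concreteness, the sketch below shows how such a post-hoc explainer can be queried for a subgraph-level explanation. It uses PyTorch Geometric's implementation of GNNExplainer as an illustration; the two-layer GCN, the toy graph, and the target node index are hypothetical and are not taken from the paper's own setup.

```python
# Minimal sketch of querying a post-hoc GNN explainer for a subgraph explanation.
# Assumptions (not from the paper): PyTorch Geometric's GNNExplainer implementation,
# a toy two-layer GCN, and a small synthetic graph; in practice the model would be
# trained before being explained.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)          # raw class logits per node

# Toy directed graph: 6 nodes, 7 edges, 8-dimensional random features.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5, 0],
                           [1, 2, 3, 4, 5, 0, 3]])

model = GCN(in_dim=8, hidden_dim=16, num_classes=2)

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',       # explain the model's own prediction
    node_mask_type='attributes',    # learn importance scores over node features
    edge_mask_type='object',        # learn importance scores over individual edges
    model_config=dict(mode='multiclass_classification',
                      task_level='node',
                      return_type='raw'),
)

# Explain the prediction for node 3: the learned edge mask scores how much each
# edge contributes, i.e. the "critical subgraph" the explainer reports.
explanation = explainer(x, edge_index, index=3)
print(explanation.edge_mask)   # one importance score per edge in edge_index
```

Thresholding the edge mask yields the explanatory subgraph; as the paper's title suggests, whether each entry of `edge_index` is treated as a directed or an undirected edge can change which structure is reported as important.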
Jun-6-2025
- Country:
- Asia
- China > Hong Kong (0.04)
- Singapore > Central Region > Singapore (0.04)
- Europe
- France > Auvergne-Rhône-Alpes
- United Kingdom (0.28)
- Genre:
- Research Report (1.00)
- Industry:
- Law Enforcement & Public Safety > Fraud (0.67)