

Probabilistic Graphical Model

Neural Information Processing Systems

Graph Neural Networks (GNNs) have been emerging as powerful solutions to many real-world applications in various domains where the datasets are in the form of graphs, such as social networks, citation networks, knowledge graphs, and biological networks [1, 2, 3].


PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks

Neural Information Processing Systems

In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Unlike existing explainers for GNNs, where explanations are drawn from a set of linear functions of the explained features, PGM-Explainer is able to demonstrate the dependencies among explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov blanket of the target prediction, i.e., it retains all of its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements as the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers on many benchmark tasks.
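The perturbation-and-dependence idea behind the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the "GNN" below is a hypothetical black-box stand-in, and the dependence test is a hand-rolled 2x2 chi-square statistic rather than the full PGM structure learning the paper describes. It shows only the first stage — randomly perturbing node features, recording whether the target prediction changes, and scoring each node by its statistical dependence on that change.

```python
import random

# Hypothetical black-box "GNN": predicts class 1 for the target node iff
# the summed features of its true influencers (nodes 1 and 2) exceed 1.0.
# PGM-Explainer is model-agnostic, so only this query interface is assumed.
def gnn_predict(features):
    return 1 if features[1] + features[2] > 1.0 else 0

random.seed(0)
n_nodes, n_samples = 5, 2000
base = [1.0] * n_nodes
base_pred = gnn_predict(base)

# Step 1: randomly perturb node features, record whether the prediction flips.
records = []  # (perturbation mask, prediction_changed)
for _ in range(n_samples):
    mask = [random.random() < 0.5 for _ in range(n_nodes)]
    feats = [0.0 if m else x for m, x in zip(mask, base)]
    records.append((mask, gnn_predict(feats) != base_pred))

# Step 2: score each node by a 2x2 chi-square statistic between
# "node was perturbed" and "prediction changed".
def chi_square(node):
    counts = [[0, 0], [0, 0]]
    for mask, changed in records:
        counts[int(mask[node])][int(changed)] += 1
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            expected = sum(counts[i]) * sum(r[j] for r in counts) / n_samples
            if expected > 0:
                stat += (counts[i][j] - expected) ** 2 / expected
    return stat

scores = {v: chi_square(v) for v in range(n_nodes)}
top = sorted(scores, key=scores.get, reverse=True)[:2]
print(sorted(top))  # the true influencers, nodes 1 and 2, should score highest
```

In the full method, the variables surviving this filtering step would then be fed to a Bayesian-network structure learner to produce the PGM explanation; this sketch stops at the dependence scores.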


A Additive feature attribution methods unify existing explainers for GNNs. In this section, we analyze the vanilla gradient-based explainers and GNNExplainer [24].

Neural Information Processing Systems

GNN and assigns importance scores to explained features. Here, we consider the simplest gradient-based explanation method, in which the score of each feature is associated with the gradient of the GNN's loss function with respect to that feature. The proof that this explanation method falls into the class of additive feature attribution methods is straightforward. S is a good explanation for the target prediction. This experimental setup is the same as that in the experiment of Figure 1.
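A minimal sketch of the gradient-based scoring described above, under assumptions: the two-feature `model` function is a hypothetical stand-in for a GNN's output, and gradients are taken by central finite differences rather than autograd. The point is the additive form itself, g(z) = phi_0 + sum_i phi_i z_i, i.e., a first-order approximation of the model around the explained input.

```python
# Hypothetical differentiable model in two features; in the real setting one
# would differentiate the GNN's loss w.r.t. the input node features.
def model(x):
    return 3.0 * x[0] - 2.0 * x[1] + x[0] * x[1]

def gradient_scores(f, x, eps=1e-6):
    # Central finite differences as a stand-in for autograd gradients.
    scores = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        scores.append((f(hi) - f(lo)) / (2 * eps))
    return scores

x = [1.0, 2.0]
phi = gradient_scores(model, x)  # feature scores phi_i

# Additive feature attribution form: g(z) = phi_0 + sum_i phi_i * z_i,
# chosen here so that g matches the model exactly at the explained point x.
phi0 = model(x) - sum(p * xi for p, xi in zip(phi, x))
print([round(p, 3) for p in phi])  # analytic gradients: [3 + x1, -2 + x0] = [5.0, -1.0]
```

Because the explanation is a single linear function of the features, any such gradient scoring is an instance of the additive class, which is exactly the limitation PGM-Explainer's conditional-probability explanations are designed to avoid.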





Review for NeurIPS paper: PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks

Neural Information Processing Systems

In this paper, the authors propose a new method to explain GNNs. The proposed algorithm, PGM-Explainer, is technically sound and novel. Through experiments, the authors demonstrate that the proposed method outperforms GNNExplainer, a state-of-the-art GNN explainer. The explanation of GNNs is an important research topic, and only a few methods exist. Moreover, all reviewers are positive about the paper; thus, it is suitable for presentation at NeurIPS.




An Explainer for Temporal Graph Neural Networks

He, Wenchong, Vu, Minh N., Jiang, Zhe, Thai, My T.

arXiv.org Artificial Intelligence

Temporal graph neural networks (TGNNs) have been widely used for modeling time-evolving graph-related tasks due to their ability to capture both graph topology dependency and non-linear temporal dynamics. Explaining TGNNs is vital for a transparent and trustworthy model. However, the complex topological structure and temporal dependency make explaining TGNN models very challenging. In this paper, we propose a novel explainer framework for TGNN models. Given a time series on a graph to be explained, the framework can identify dominant explanations in the form of a probabilistic graphical model over a time period. Case studies in the transportation domain demonstrate that the proposed approach can discover dynamic dependency structures in a road network over a time period.