pgexplainer
Parameterized Explainer for Graph Neural Network
Despite recent progress in Graph Neural Networks (GNNs), explaining the predictions made by GNNs remains a challenging open problem. The leading method mainly addresses local explanations (i.e., important subgraph structures and node features) to interpret why a GNN model makes its prediction for a single instance, e.g., a node or a graph. As a result, the generated explanation is painstakingly customized to each instance. Interpreting each instance independently is not sufficient to provide a global understanding of the learned GNN model; it lacks generalizability and cannot be applied in the inductive setting. Besides, because it is designed for explaining a single instance, it cannot naturally explain a set of instances (e.g., graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which makes PGExplainer a natural approach to multi-instance explanations. Compared to the existing work, PGExplainer has better generalization power and can easily be used in an inductive setting. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification over the leading baseline.
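The parameterized generation process described in the abstract can be sketched roughly as follows: a small MLP, shared across all instances, maps the endpoint embeddings of each candidate edge to an importance probability, so explanations for many instances come from one trained network instead of per-instance optimization. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the embedding shapes, layer sizes, and edge list are made-up placeholders, and the reparameterized sampling and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: node embeddings from an already-trained GNN
# (dimensions and edge list are illustrative placeholders).
num_nodes, emb_dim, hidden = 6, 8, 16
Z = rng.normal(size=(num_nodes, emb_dim))          # node embeddings
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # candidate edges

# Shared MLP parameters: the "parameterized" part, reused for every
# instance rather than fitted to one instance at a time.
W1 = rng.normal(size=(2 * emb_dim, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1)) * 0.1
b2 = np.zeros(1)

def edge_importance(u, v):
    """Score one edge from the concatenated embeddings of its endpoints."""
    x = np.concatenate([Z[u], Z[v]])
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU hidden layer
    logit = (h @ W2 + b2)[0]
    return 1.0 / (1.0 + np.exp(-logit))            # sigmoid -> probability

# Soft edge mask over the candidate edges of one instance.
mask = {e: edge_importance(*e) for e in edges}
```

In the actual method these probabilities would be relaxed-sampled and trained end-to-end against the GNN's predictions; the sketch only shows why one shared network yields multi-instance, inductive explanations.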
As discussed in lines 47-52, to explain a set of instances, GNNExplainer first interprets a representative instance and then adopts ad-hoc post-analysis.
We appreciate the valuable feedback from all the reviewers and will include the following discussion in our work.

We believe that this is not an elegant way to obtain a global view of the GNN model: "Since the explanatory motifs are not learned end-to-end, the model however may suffer from sub-optimal generalization." PGExplainer is natively designed for collectively explaining multiple instances. The source code of PGExplainer can be found on GitHub under the name "PGExplainer".

We follow the experimental setting in GNNExplainer. "Explanation accuracy" is not formally defined. We didn't report std for baselines because they don't have sampling processes; baselines' stds are shown in the table below. PGExplainer is a general model compatible with different GNNs and diverse learning tasks.

Besides, instead of edge-level importance scores, they only calculate node-level importance scores. We select the "Gradient" method from the CVPR paper, which doesn't require [...]. The AUC scores on BA-2motifs and MUTAG are 0.773 and 0.653. The KDD paper mentioned only just appeared (June 3). Second, it only provides model-level explanations without preserving local fidelity; that's why we call it a global method. As discussed in [38], "local fidelity" requires an explanation to be faithful to the model's behavior on the instance being explained. PGExplainer preserves "local fidelity" and, at the same time, maintains a global view of the GNN model.

GNNExplainer is a pioneer in providing explanations for GNNs' predictions. We include a parameterized network to give the explainer a global view of the GNN model. PGExplainer is much more effective and efficient than the state-of-the-art method. As discussed in Appendix D.1, "PGExplainer
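The AUC numbers quoted in the rebuttal come from treating edges inside the ground-truth motif as positives and ranking all edges by the explainer's importance scores. A hedged sketch of that evaluation, assuming a standard pairwise-ranking definition of ROC-AUC; the scores and labels below are toy numbers, not the reported experiments:

```python
import numpy as np

def explanation_auc(scores, labels):
    """ROC-AUC of edge-importance scores against binary ground-truth
    motif membership: the probability that a randomly chosen motif edge
    outranks a randomly chosen non-motif edge (ties count as 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]                  # motif edges
    neg = scores[labels == 0]                  # non-motif edges
    diff = pos[:, None] - neg[None, :]         # all positive/negative pairs
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (len(pos) * len(neg))

# Toy example: 5 edges, of which the first two belong to the motif.
scores = [0.9, 0.4, 0.3, 0.2, 0.7]
labels = [1,   1,   0,   0,   0]
auc = explanation_auc(scores, labels)
print(round(auc, 3))  # -> 0.833 (5 of 6 positive/negative pairs ranked correctly)
```

This matches how a library routine such as scikit-learn's `roc_auc_score` would score the same inputs, but is written out to make the pairwise-ranking interpretation explicit.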