Neural Information Processing Systems
We appreciate the valuable feedback from all the reviewers and will include the following discussion in our work.

We believe that this is not an elegant way to obtain a global view of the GNN model: since the explanatory motifs are not learned end-to-end, the model may suffer from sub-optimal generalization.

PGExplainer is natively designed for collectively explaining multiple instances. Using graph classification as an example, in Algorithm 2, i is the index of an instance (graph) to be explained. Since the index (i, j) is used to indicate an edge in Eq. (11), we do not expand Eq. (11), to avoid confusion.

The source code of PGExplainer can be found on GitHub under the name "PGExplainer".

We follow the experimental setting in GNNExplainer, where "explanation accuracy" is not formally defined.

We did not report standard deviations for the baselines because they involve no sampling process. The baselines' standard deviations are shown in the table below.

PGExplainer is a general model compatible with different GNNs and diverse learning tasks. Besides, instead of edge-level importance scores, those methods only calculate node-level importance scores.

We select the "Gradient" method from the CVPR paper, which does not require […]. The AUC scores on BA-2motifs and MUTAG are 0.773 and 0.653, respectively.

The KDD paper mentioned only just appeared (June 3). Second, it provides only model-level explanations without preserving local fidelity; that is why we call it a global method. As discussed in [38], "local fidelity" requires an explanation […]. PGExplainer preserves "local fidelity" while, at the same time, maintaining a global view of the GNN model.

GNNExplainer is a pioneering work in explaining GNNs' predictions. We include a parameterized network to give the explainer a global view of the GNN model. PGExplainer is much more effective and efficient than the state-of-the-art method. As discussed in Appendix D.1, "PGExplainer […]
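As a rough illustration of the two points above — a shared parameterized network that scores edges across all instances (giving the explainer a global view), and a sampling step that makes PGExplainer stochastic while the baselines are not — here is a minimal NumPy sketch. It is not the paper's implementation: the function names, weight shapes, and the NumPy stand-in for the actual (PyTorch) MLP are our own assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_edge_scores(edge_feats, W1, b1, W2, b2):
    """Shared MLP: maps the feature vector of an edge (e.g. the concatenated
    embeddings of its two endpoint nodes) to a scalar importance logit.
    Because (W1, b1, W2, b2) are shared across all edges of all instances,
    the explainer is trained collectively, not per-instance."""
    h = np.maximum(edge_feats @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)           # one logit per edge

def concrete_edge_mask(logits, temperature=0.5, rng=rng):
    """Binary Concrete relaxation: a differentiable, stochastic edge mask
    in (0, 1). This sampling step is why PGExplainer has a std over runs
    while non-stochastic baselines do not."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=logits.shape)
    noise = np.log(u) - np.log(1.0 - u)        # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

# Toy setup: node embeddings of dim 4; edge (i, j) is represented by the
# concatenation [z_i, z_j], so edge features have dim 8. Small weight scale
# keeps the sigmoid away from saturation in this demo.
d, hidden = 4, 16
W1 = 0.1 * rng.standard_normal((2 * d, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, 1));     b2 = np.zeros(1)

# Two instances (graphs) of different sizes, explained with the SAME parameters.
for n_edges in (5, 7):
    edge_feats = rng.standard_normal((n_edges, 2 * d))
    mask = concrete_edge_mask(mlp_edge_scores(edge_feats, W1, b1, W2, b2))
    assert mask.shape == (n_edges,)
    assert np.all((mask > 0.0) & (mask < 1.0))  # soft mask, strictly inside (0, 1)
```

In training, the soft mask would weight the adjacency matrix before re-running the GNN, and the shared parameters would be updated by the mutual-information-style objective; that loop is omitted here.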