
Neural Information Processing Systems 

In this section, we analyze the vanilla gradient-based explainers and GNNExplainer [24] under the explanation model framework. The proof that this explanation method falls into the class of additive feature attribution methods is quite straightforward. The condition relating G and S indicates that the realization of G must be consistent with the realization of the subgraph S. Thus, GNNExplainer would fail to explain the predictions of those models. In Figure 1, we provide an example illustrating the impact of the no-child constraint (3) on the PGM explanation. However, the constraint changes the edges in the Bayesian network.
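As a minimal sketch (not the paper's code), the additive feature attribution form referenced above can be written as a surrogate model g(z') = φ₀ + Σᵢ φᵢ z'ᵢ, where each binary indicator z'ᵢ marks whether feature i (e.g., a node retained in the subgraph S) is present. The function name and the attribution values below are illustrative assumptions, not taken from the paper.

```python
def additive_attribution(phi0, phi, z):
    """Evaluate the additive surrogate g(z') = phi0 + sum_i phi_i * z_i
    on a binary presence mask z (illustrative sketch)."""
    assert len(phi) == len(z)
    return phi0 + sum(p * zi for p, zi in zip(phi, z))

# Hypothetical example: base value 0.1 and three features with
# attributions 0.3, -0.2, and 0.5.
phi0, phi = 0.1, [0.3, -0.2, 0.5]
all_on = additive_attribution(phi0, phi, [1, 1, 1])   # full subgraph kept
none_on = additive_attribution(phi0, phi, [0, 0, 0])  # all features masked
```

With every feature present, g returns the base value plus all attributions; with every feature masked, it falls back to the base value φ₀, which is the defining property of this class of explainers.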
