Towards More Practical Adversarial Attacks on Graph Neural Networks

Neural Information Processing Systems 

We demonstrate that the structural inductive biases of GNN models can be an effective source for this type of attack. Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show that common gradient-based white-box attacks can be generalized to the black-box setting via the connection between the gradient and an importance score similar to PageRank.
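The PageRank-like importance score referenced above can be illustrated with a standard power-iteration PageRank over a directed graph. This is a minimal sketch of the generic scoring idea, not the paper's actual attack algorithm: the toy graph, the damping factor, and the top-k node selection are all illustrative assumptions.

```python
# Minimal sketch: PageRank-style importance scores via power iteration.
# A black-box attacker could use such scores to rank candidate nodes,
# since the paper connects GNN gradients to a PageRank-like quantity.
# The graph and parameters below are illustrative, not from the paper.

def pagerank(adj, damping=0.85, iters=100):
    """adj: dict mapping node -> list of out-neighbors (directed edges)."""
    nodes = list(adj)
    n = len(nodes)
    scores = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u in nodes:
            out = adj[u]
            if not out:
                # Dangling node: spread its mass uniformly.
                for v in nodes:
                    new[v] += damping * scores[u] / n
            else:
                share = damping * scores[u] / len(out)
                for v in out:
                    new[v] += share
        scores = new
    return scores

# Toy graph: node 2 receives edges from every other node.
graph = {0: [2], 1: [2], 2: [0], 3: [2]}
scores = pagerank(graph)
# Rank nodes by importance; an attacker might target the top-scoring ones.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # node 2 has the highest importance score
```

In the black-box setting, the appeal of such a score is that it depends only on graph structure, not on model gradients, which matches the abstract's claim that structural inductive biases alone suffice.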
