Single-Node Attack for Fooling Graph Neural Networks
Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon
– arXiv.org Artificial Intelligence
Graph neural networks (GNNs) have shown broad applicability in a variety of domains, making them suitable for many kinds of real-world structured data. Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior. While most work in this field has focused on improving the accuracy of GNNs and applying them to a growing number of domains, only a few past works have explored the vulnerability of GNNs to adversarial examples. In this paper, we show that GNNs are vulnerable even in the extremely limited scenario of a single-node adversarial example, where the attacking node cannot be picked by the attacker: an adversary can force the GNN to classify any target node as a chosen label by only slightly perturbing a single other, arbitrary node in the graph. When the adversary is allowed to pick a specific attacker node, the attack is even more effective. We show that this attack is effective across various GNN types, such as GraphSAGE, GCN, GAT, and GIN, across a variety of real-world datasets, and as both a targeted and a non-targeted attack.
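The abstract describes the attack setting (a targeted attack that perturbs only one attacker node's features to flip a target node's prediction) without detailing the optimization. Below is a minimal gradient-based sketch of that idea in PyTorch Geometric; the two-layer GCN, the `single_node_attack` function, and hyperparameters such as `epsilon` are illustrative assumptions, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer GCN; the paper also evaluates GraphSAGE, GAT, and GIN."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def single_node_attack(model, x, edge_index, target, attacker, label,
                       epsilon=0.1, steps=50, lr=0.01):
    """Hypothetical sketch: optimize a small perturbation of the attacker
    node's features so that `target` is classified as `label`. In the
    paper's hardest setting, `attacker` is arbitrary (e.g., chosen at
    random), not selected by the adversary."""
    model.eval()
    delta = torch.zeros_like(x[attacker], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = x.clone()
        x_adv[attacker] = x[attacker] + delta  # perturb one node only
        logits = model(x_adv, edge_index)
        # Targeted objective: push the target node toward the chosen label.
        loss = F.cross_entropy(logits[target].unsqueeze(0),
                               torch.tensor([label]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the perturbation "slight"
            delta.clamp_(-epsilon, epsilon)
    return (x[attacker] + delta).detach()
```

A non-targeted variant of this sketch would instead maximize the loss of the target node's current prediction; the same loop applies with the objective negated.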
Nov-6-2020
- Country:
- Asia > Middle East > Israel (0.14)
- Genre:
- Research Report (0.64)
- Industry:
- Information Technology > Security & Privacy (0.69)
- Technology: