Watermarking Graph Neural Networks based on Backdoor Attacks
Jing Xu, Stefanos Koffas, Oguzhan Ersoy, Stjepan Picek
arXiv.org Artificial Intelligence
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task, as it requires a large amount of training data, powerful computing resources, and human expertise in fine-tuning the model. Moreover, adversarial attacks, e.g., model stealing attacks, pose challenges to model authentication. To protect GNN models against copyright infringement, it is necessary to verify their ownership. This paper presents a watermarking framework for GNNs covering both graph and node classification tasks. We 1) design two strategies to generate watermarked data for the graph classification task and one for the node classification task, 2) embed the watermark into the host model through training to obtain the watermarked GNN model, and 3) verify the ownership of a suspicious model in a black-box setting. Experiments show that our framework verifies the ownership of GNN models with very high probability (up to $99\%$) for both tasks. Finally, we experimentally show that our watermarking approach is robust against a state-of-the-art model extraction technique and four state-of-the-art defenses against backdoor attacks.
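The three steps described in the abstract can be sketched in simplified form. This is an illustrative assumption, not the paper's exact design: the function names, the dense-random-subgraph trigger, and the accuracy-threshold verification rule are all hypothetical stand-ins for the strategies the authors actually propose.

```python
import random

def inject_trigger(adj, trigger_nodes, density=0.9, seed=0):
    """Step 1 (sketch): generate watermarked data by embedding a dense
    random subgraph (the trigger) among a chosen set of nodes in a graph
    given as an adjacency matrix. Returns a new matrix; the input is not
    modified."""
    rng = random.Random(seed)
    wm = [row[:] for row in adj]  # copy so the clean graph is preserved
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i < j and rng.random() < density:
                wm[i][j] = wm[j][i] = 1
    return wm

def verify_ownership(predict, watermarked_graphs, target_label, threshold=0.9):
    """Step 3 (sketch): black-box verification. A model that was trained
    on the watermarked data (step 2) should map the trigger graphs to the
    target label far more often than an independently trained model, so
    ownership is claimed when the watermark accuracy exceeds a threshold."""
    hits = sum(predict(g) == target_label for g in watermarked_graphs)
    return hits / len(watermarked_graphs) >= threshold
```

Note that verification only queries the suspicious model for predictions, matching the black-box setting: no access to its weights or gradients is required.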
Nov-13-2022