Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks
Abbahaddou, Yassine, Ennadir, Sofiane, Lutzeyer, Johannes F., Vazirgiannis, Michalis, Boström, Henrik
– arXiv.org Artificial Intelligence
Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance in various graph representation learning tasks. Recently, studies have revealed their vulnerability to adversarial attacks. In this work, we theoretically define the concept of expected robustness in the context of attributed graphs and relate it to the classical definition of adversarial robustness in the graph representation learning literature. Our definition allows us to derive an upper bound on the expected robustness of Graph Convolutional Networks (GCNs) and Graph Isomorphism Networks subject to node feature attacks. Building on these findings, we connect the expected robustness of GNNs to the orthonormality of their weight matrices and consequently propose an attack-independent, more robust variant of the GCN, called Graph Convolutional Orthonormal Robust Networks (GCORNs). We further introduce a probabilistic method to estimate the expected robustness, which allows us to evaluate the effectiveness of GCORN on several real-world datasets. Our experiments show that GCORN outperforms available defense methods. Our code is publicly available at: https://github.com/Sennadir/GCORN.

Graph-structured data is prevalent in a wide range of domains, which has motivated the development of neural network models that can operate on graphs, known as Graph Neural Networks (GNNs). GNNs have emerged as a powerful tool for learning node and graph representations. Many GNNs are instances of Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017), such as Graph Isomorphism Networks (GIN) (Xu et al., 2019b) and Graph Convolutional Networks (GCN) (Kipf & Welling, 2017). These models have been successfully applied in real-world applications such as molecular design (Kearnes et al., 2016). In parallel to their success, it has been shown, particularly in the field of computer vision, that deep learning architectures can be susceptible to adversarial attacks (Goodfellow et al., 2015).
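The abstract describes two technical components only at a high level: a GCN variant whose robustness is encouraged by keeping weight matrices (approximately) orthonormal, and a probabilistic (Monte Carlo) estimator of expected robustness under node feature perturbations. The PyTorch sketch below illustrates both ideas under explicit assumptions; it is not the authors' GCORN implementation. The QR-based re-orthonormalization, the Gaussian feature-noise model, and all names (`OrthoGCNLayer`, `estimated_expected_robustness`, etc.) are illustrative choices made here; the exact construction and estimator are given in the paper.

```python
import torch
import torch.nn as nn


class OrthoGCNLayer(nn.Module):
    """GCN layer whose weight matrix is kept (approximately) orthonormal.

    Illustrative sketch only: here we simply project the weight onto
    matrices with orthonormal columns via a QR decomposition, assuming
    in_dim >= out_dim. The paper's GCORN uses its own orthonormalization.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.orthogonal_(self.weight)

    @torch.no_grad()
    def orthonormalize(self):
        # Replace W by the Q factor of its QR decomposition (orthonormal columns).
        q, _ = torch.linalg.qr(self.weight)
        self.weight.copy_(q)

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Standard GCN propagation A_hat X W, with a dense normalized adjacency A_hat.
        return adj_norm @ (x @ self.weight)


class TwoLayerOrthoGCN(nn.Module):
    """A minimal two-layer GCN built from the orthonormalized layers above."""

    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        self.layer1 = OrthoGCNLayer(in_dim, hidden_dim)
        self.layer2 = OrthoGCNLayer(hidden_dim, n_classes)

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.layer1(adj_norm, x))
        return self.layer2(adj_norm, h)


@torch.no_grad()
def estimated_expected_robustness(model, adj_norm, x, noise_std=0.1, n_samples=100):
    """Monte Carlo estimate of prediction stability under random feature noise.

    This follows the spirit of a probabilistic robustness estimator; the Gaussian
    perturbation model and the prediction-agreement criterion are assumptions
    made here for illustration.
    """
    clean_pred = model(adj_norm, x).argmax(dim=-1)
    agree = torch.zeros(x.size(0))
    for _ in range(n_samples):
        x_pert = x + noise_std * torch.randn_like(x)
        agree += (model(adj_norm, x_pert).argmax(dim=-1) == clean_pred).float()
    # Average, over nodes and samples, of how often the prediction is unchanged.
    return (agree / n_samples).mean().item()


# Hypothetical usage (adj_norm and x would come from a real dataset):
# model = TwoLayerOrthoGCN(in_dim=16, hidden_dim=8, n_classes=4)
# model.layer1.orthonormalize(); model.layer2.orthonormalize()  # e.g. after each optimizer step
# score = estimated_expected_robustness(model, adj_norm, x)
```

In this sketch, the orthonormalization step would typically be applied after each training update so that the learned weights stay close to the orthonormal set that the paper's bound relates to robustness; the estimator then reports the fraction of predictions that survive random node feature perturbations.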
arXiv.org Artificial Intelligence
Apr-27-2024
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Government > Military (0.87)
- Information Technology > Security & Privacy (1.00)
- Technology: