A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications

Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr

arXiv.org Artificial Intelligence 

Privacy attacks are a popular and well-developed topic in fields such as social network analysis, healthcare, finance, and systems [88], [89], [90]. In recent years, the surge of machine learning has provided powerful tools for solving many practical problems. However, data-driven approaches also threaten users' privacy due to the associated risks of data leakage and inference [85]. Consequently, a substantial amount of work has been devoted to investigating the vulnerabilities of ML models and the risks of privacy leakage [47]. One branch of privacy research is the development of privacy attack models, which has received much attention over the past few years. However, attack models targeting GNNs have only been explored very recently, because GNN techniques are relatively new compared with CNNs and Transformers in the image and natural language processing (NLP) domains, and because the irregular graph structure poses unique challenges to transferring attack techniques that are well established in other domains. In this section, we summarize papers that have developed attack models specifically targeting GNNs. We classify the privacy attack models on GNNs into four categories (visualized in Figure 4): a) model extraction attacks (MEA); b) graph structure reconstruction attacks (GSR); c) attribute inference attacks (AIA); and d) membership inference attacks (MIA).

Figure 4: Illustrations of the four categories of privacy attack models on graphs: a) model extraction attacks (MEA); b) graph structure reconstruction (GSR); c) attribute inference attacks (AIA); and d) membership inference attacks (MIA).
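As a concrete illustration of category d), the short Python sketch below implements a simple confidence-thresholding membership inference attack, a common baseline of the kind surveyed here. The synthetic posterior generator victim_posteriors, the 0.8 threshold, and all data are illustrative assumptions standing in for queries to a trained victim GNN; they are not drawn from any specific paper.

    # Minimal sketch: confidence-thresholding membership inference attack (MIA).
    # Assumption: victim_posteriors is a hypothetical stand-in for querying the
    # victim GNN's node classifier; training-set members tend to receive more
    # peaked (confident) posteriors than non-members.
    import numpy as np

    rng = np.random.default_rng(0)

    def victim_posteriors(n_nodes, n_classes, member):
        """Synthetic softmax outputs: members get more peaked distributions."""
        concentration = 5.0 if member else 1.0  # illustrative assumption
        logits = rng.normal(0.0, concentration, size=(n_nodes, n_classes))
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)

    def mia_predict(posteriors, threshold=0.8):
        """Predict 'member' when the model's top-class confidence is high."""
        return posteriors.max(axis=1) >= threshold

    members = victim_posteriors(500, 4, member=True)
    non_members = victim_posteriors(500, 4, member=False)

    preds = np.concatenate([mia_predict(members), mia_predict(non_members)])
    labels = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
    print(f"attack accuracy: {(preds == labels).mean():.2f}")

In practice, the attacker would replace victim_posteriors with real queries to the target GNN's prediction API; stronger attacks in this category train a shadow model on auxiliary data to calibrate the decision threshold or to learn an attack classifier over the full posterior vector.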
