AdvSGM: Differentially Private Graph Learning via Adversarial Skip-gram Model
Sen Zhang, Qingqing Ye, Haibo Hu, Jianliang Xu
The skip-gram model (SGM), which employs a neural network to generate node vectors, serves as the basis for numerous popular graph embedding techniques. However, since the training datasets contain sensitive linkage information, the parameters of a released SGM may encode private information and pose significant privacy risks. Differential privacy (DP) is a rigorous standard for protecting individual privacy in data analysis. Nevertheless, applying differential privacy to skip-gram on graphs is highly challenging due to the complex link relationships, which can result in high sensitivity and necessitate substantial noise injection. To tackle this challenge, we present AdvSGM, a differentially private skip-gram for graphs via adversarial training. Our core idea is to leverage adversarial training to privatize skip-gram while improving its utility. Towards this end, we develop a novel adversarial training module by devising two optimizable noise terms that correspond to the parameters of the skip-gram. By fine-tuning the weights between modules within AdvSGM, we achieve differentially private gradient updates without additional noise injection. Extensive experimental results on six real-world graph datasets show that AdvSGM preserves high data utility across different downstream tasks.

Graph embedding, which has attracted increasing research attention, represents nodes by low-dimensional vectors while preserving the inherent properties and structure of the graph. In this way, well-studied machine learning algorithms can be readily applied to further mining tasks such as clustering, classification, and prediction. Skip-gram models (SGMs), including DeepWalk [1], LINE [2], and node2vec [3], are a popular class of graph embedding models owing to their simplicity and effectiveness. However, SGMs capture not only general data characteristics but also specific details about individual data points, making them vulnerable to adversarial attacks, particularly user-linkage attacks [4] that exploit the linkage information between nodes to infer whether an individual is present in the training dataset. Therefore, node embeddings need to be sanitized with privacy guarantees before they can be released to the public. Differential privacy (DP) [5] is a well-studied statistical privacy model recognized for its rigorous mathematical underpinnings. In this paper, we study the problem of achieving a privacy-preserving skip-gram for graphs under differential privacy.
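For readers less familiar with SGMs, the sketch below shows the standard skip-gram objective with negative sampling over node pairs, the training signal that DeepWalk-, LINE-, and node2vec-style models optimize. All names, dimensions, and the PyTorch framing are illustrative choices, not taken from the paper.

```python
# A minimal sketch of skip-gram with negative sampling (SGNS) for graph node
# embeddings. Co-occurring node pairs (e.g., from random walks) are pulled
# together; randomly sampled pairs are pushed apart. Sizes are illustrative.
import torch
import torch.nn.functional as F

num_nodes, dim = 1000, 128
center_emb = torch.nn.Embedding(num_nodes, dim)   # "input" node vectors
context_emb = torch.nn.Embedding(num_nodes, dim)  # "output" context vectors

def sgns_loss(center, pos_context, neg_context):
    """Negative-sampling loss over a mini-batch of node-index tensors."""
    c = center_emb(center)                                  # (B, d)
    p = context_emb(pos_context)                            # (B, d)
    n = context_emb(neg_context)                            # (B, K, d)
    pos_score = (c * p).sum(-1)                             # (B,)
    neg_score = torch.bmm(n, c.unsqueeze(-1)).squeeze(-1)   # (B, K)
    return -(F.logsigmoid(pos_score).mean()
             + F.logsigmoid(-neg_score).mean())

# Example: one mini-batch of (center, positive, K negatives) node indices.
B, K = 32, 5
loss = sgns_loss(torch.randint(0, num_nodes, (B,)),
                 torch.randint(0, num_nodes, (B,)),
                 torch.randint(0, num_nodes, (B, K)))
loss.backward()
```

The two embedding tables here are exactly the "parameters of the skip-gram" that a released model exposes, which is why the gradients flowing into them are the object that must be privatized.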
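For contrast, the conventional route to a differentially private SGM is DP-SGD-style training: clip each example's gradient, then add calibrated Gaussian noise before the update. The abstract's point is that AdvSGM sidesteps this explicit noise injection; the sketch below, with placeholder clipping and noise constants, only illustrates the standard baseline being avoided, not AdvSGM's mechanism.

```python
# A hypothetical DP-SGD step (Abadi et al. style): per-example gradient
# clipping plus Gaussian noise. clip_norm, sigma, and lr are placeholders.
import torch

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, sigma=1.0, lr=0.1):
    """params: list of tensors; per_example_grads: one gradient list per
    example, each matching params. Applies a noised average-gradient step."""
    clipped = []
    for grads in per_example_grads:
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
        clipped.append([g * scale for g in grads])
    batch = len(per_example_grads)
    for i, p in enumerate(params):
        g_sum = sum(c[i] for c in clipped)
        noise = torch.randn_like(p) * sigma * clip_norm
        p.data -= lr * (g_sum + noise) / batch

# Usage with a single parameter and gradients from two examples:
w = torch.zeros(3)
dp_sgd_step([w], [[torch.ones(3)], [2 * torch.ones(3)]])
```

On dense graphs the clipping threshold must absorb the influence of many links at once, which is the high-sensitivity, heavy-noise regime the abstract describes and that AdvSGM's adversarial noise terms are designed to avoid.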
arXiv.org Artificial Intelligence
Mar-27-2025