
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Neural Information Processing Systems

Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite the proliferation of such methods and their success, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs). Here, we develop GNNGuard, a general defense approach against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. Its core principle is to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate the negative effects of the attack.
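The core principle stated above — detecting the relationship between graph structure and node features and exploiting it to mitigate an attack — can be sketched as scoring each edge by the feature agreement of its endpoints and pruning edges that score too low. The following is an illustrative reconstruction, not the authors' implementation; the function name, the choice of cosine similarity, and the pruning threshold are assumptions.

```python
# Illustrative sketch of the GNNGuard idea (not the authors' code):
# estimate edge "trustworthiness" from node-feature similarity and
# down-weight or prune suspicious edges before message passing.
import numpy as np

def defense_edge_weights(X, edges, prune_below=0.1):
    """X: (n, d) node features; edges: list of (u, v) pairs.
    Returns one weight per edge; weights below `prune_below` are zeroed."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)      # unit-normalize features
    weights = []
    for u, v in edges:
        sim = max(float(Xn[u] @ Xn[v]), 0.0)  # non-negative cosine similarity
        weights.append(sim if sim >= prune_below else 0.0)
    return np.array(weights)

# Toy example: one edge between similar nodes survives, one edge between
# dissimilar nodes (a plausible adversarial insertion) is pruned.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
w = defense_edge_weights(X, [(0, 1), (0, 2)])
```

In a GNN, these weights would rescale the messages passed along each edge at every layer, so an attack edge between nodes with unrelated features contributes little to the aggregation.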


Robustness in Text-Attributed Graph Learning: Insights, Trade-offs, and New Defenses

Runlin Lei, Lu Yi, Mingguo He, Pengyu Qiu, Zhewei Wei, Yongchao Liu, Chuntao Hong

arXiv.org Artificial Intelligence

While Graph Neural Networks (GNNs) and Large Language Models (LLMs) are powerful approaches for learning on Text-Attributed Graphs (TAGs), a comprehensive understanding of their robustness remains elusive. Current evaluations are fragmented, failing to systematically investigate the distinct effects of textual and structural perturbations across diverse models and attack scenarios. To address these limitations, we introduce a unified and comprehensive framework to evaluate robustness in TAG learning. Our framework evaluates classical GNNs, robust GNNs (RGNNs), and GraphLLMs across ten datasets from four domains, under diverse text-based, structure-based, and hybrid perturbations in both poisoning and evasion scenarios. Our extensive analysis reveals multiple findings, among which three are particularly noteworthy: 1) models have inherent robustness trade-offs between text and structure, 2) the performance of GNNs and RGNNs depends heavily on the text encoder and attack type, and 3) GraphLLMs are particularly vulnerable to training data corruption. To overcome the identified trade-offs, we introduce SFT-auto, a novel framework that delivers superior and balanced robustness against both textual and structural attacks within a single model. Our work establishes a foundation for future research on TAG security and offers practical solutions for robust TAG learning in adversarial environments. Our code is available at: https://github.com/Leirunlin/TGRB.


Review for NeurIPS paper: GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Neural Information Processing Systems

Weaknesses: * The proposed approach is heuristic. The main downside of heuristic defenses (unlike certified defenses) is that they are often easily broken by an adaptive attacker. With this in mind, a proper evaluation of a heuristic defense warrants a strong attempt by the authors to break it, by proposing an attack tailored specifically to it. Without such evidence it is not clear whether this defense would be useful in practice. For example, it is relatively straightforward to add a term to Mettack's loss that encourages adversarial edges between nodes with similar representations.
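The adaptive attack the reviewer suggests could be sketched as biasing the attacker's edge selection toward node pairs with similar representations, so that inserted edges evade a similarity-based defense. This is an illustrative sketch of that idea only; the raw per-edge gains are assumed to come from a Mettack-style meta-gradient and are taken as given here, and the function name and `lam` weight are assumptions.

```python
# Illustrative sketch of the reviewer's suggested adaptive attack:
# add a similarity bonus to each candidate edge's attack score, so the
# attacker prefers edges between nodes with similar representations.
import numpy as np

def adaptive_edge_scores(attack_gain, H, candidates, lam=1.0):
    """attack_gain: dict mapping (u, v) -> estimated loss increase
    (e.g. from a Mettack-style meta-gradient, assumed precomputed);
    H: (n, d) node representations; lam: weight of the similarity bonus."""
    Hn = H / np.clip(np.linalg.norm(H, axis=1, keepdims=True), 1e-12, None)
    scores = {}
    for (u, v) in candidates:
        sim = float(Hn[u] @ Hn[v])              # cosine similarity
        scores[(u, v)] = attack_gain[(u, v)] + lam * sim
    return scores

# Toy example: both candidate edges have equal raw attack gain, but the
# similarity bonus makes the attacker pick the harder-to-detect pair.
H = np.array([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]])
gains = {(0, 1): 1.0, (0, 2): 1.0}
s = adaptive_edge_scores(gains, H, [(0, 1), (0, 2)])
best = max(s, key=s.get)
```

Edges chosen this way connect nodes a similarity-based pruning rule would consider trustworthy, which is precisely why the reviewer argues such an evaluation is needed.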


Review for NeurIPS paper: GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Neural Information Processing Systems

Three reviewers participated in the discussion. The main concern was that the proposed method works only on graphs with homophily. Although the rebuttal gives an example where structural similarity of nodes can also be used to design defense strategies, this still amounts to an assumption about the graph. That said, some reviewers pointed out that this is not a deal-breaking limitation, since the assumption is clearly stated in the paper. The reviewers also agreed that the proposed method is simple yet effective.
