Denoising Structure against Adversarial Attacks on Graph Representation Learning

  • Na Chen
  • Ping Li*
  • Jincheng Huang
  • Kai Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Despite their excellent performance in graph representation learning, graph convolutional networks have been proven vulnerable to imperceptible adversarial perturbations of the connectivity between nodes. In this work, by examining the impact of adversarial attacks on graph data, we empirically find that the dominant edge-addition attacks generally increase the heterophily between connected nodes, which fools transductive inference models on the node classification task. To defend against such attacks, we develop a Two-Stage Denoising (TSD) method that removes possibly malicious edges so as to mitigate the heterophily introduced by attacks. In particular, after a rough removal of links with very low feature similarity, our method further spots potentially heterophilous links by predicting node labels with a multi-view labeling consensus. This design rests on the assumption that if the label predictions for the same node from two different views of the graph are consistent, then the labeling is likely to be reliable. Experiments demonstrate that denoising a graph in this way remarkably improves the robustness of graph convolutional networks on the node classification task, compared with several strong, competitive robust graph neural network models.
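The two stages described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the threshold `tau`, the function names, and the use of plain cosine similarity over raw node features are all assumptions made for clarity, and the "two views" are simply passed in as precomputed label arrays.

```python
import numpy as np

def cosine_sim(x, y):
    # Cosine similarity between two feature vectors; 0 if either is all-zero.
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 0.0 if denom == 0 else float(x @ y / denom)

def prune_low_similarity_edges(X, edges, tau=0.1):
    # Stage 1 (assumed form): roughly remove links whose endpoint
    # features have very low similarity.
    return [(u, v) for (u, v) in edges if cosine_sim(X[u], X[v]) >= tau]

def consensus_prune(edges, labels_view1, labels_view2):
    # Stage 2 (assumed form): where two views agree on a node's label,
    # treat that label as reliable; drop edges whose endpoints have
    # reliable but different labels (likely heterophilous/adversarial).
    kept = []
    for u, v in edges:
        u_reliable = labels_view1[u] == labels_view2[u]
        v_reliable = labels_view1[v] == labels_view2[v]
        if u_reliable and v_reliable and labels_view1[u] != labels_view1[v]:
            continue  # reliable labels disagree across this edge: prune it
        kept.append((u, v))
    return kept

# Toy usage: node 2's features are orthogonal to node 0's, so the
# attack-like edge (0, 2) is pruned in stage 1.
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]])
edges = [(0, 1), (0, 2)]
stage1 = prune_low_similarity_edges(X, edges, tau=0.5)

# Both views agree on all labels, so the cross-class edge (1, 2) is pruned.
stage2 = consensus_prune([(0, 1), (1, 2)], [0, 0, 1], [0, 0, 1])
```

A classifier trained on the denoised graph would then only propagate information along the surviving, presumably homophilous, edges.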

Original language: English
Article number: 53
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 16
Issue number: 3
DOIs
State: Published - 15 Apr 2025

Keywords

  • Adversarial attacks
  • Graph convolutional networks
  • Node classification
  • Robustness
