TY - JOUR
T1 - Denoising Structure against Adversarial Attacks on Graph Representation Learning
AU - Chen, Na
AU - Li, Ping
AU - Huang, Jincheng
AU - Zhang, Kai
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/4/15
Y1 - 2025/4/15
N2 - Despite their excellent performance in graph representation learning, graph convolutional networks have been shown to be vulnerable to adversarial perturbations of the connectivity between nodes that occur in an unnoticeable manner. In this work, by examining the impact of adversarial attacks on graph data, we empirically find that the dominant edge-addition attacks generally increase the heterophily between connected nodes, which fools transductive inference models on the node classification task. To defend against such attacks, we develop a Two-Stage Denoising (TSD) method that removes possibly malicious edges so as to mitigate the heterophily introduced by attacks. In particular, after a coarse removal of links with very low feature similarity, our method further identifies potentially heterophilous links by predicting node labels with a multi-view labeling consensus. This design is based on the assumption that if the label predictions for the same node from two different views of the graph data are consistent, then the labeling is likely to be reliable. Experiments demonstrate that denoising a graph in this way remarkably improves the robustness of graph convolutional networks on the node classification task, compared to several strong robust graph neural network models.
AB - Despite their excellent performance in graph representation learning, graph convolutional networks have been shown to be vulnerable to adversarial perturbations of the connectivity between nodes that occur in an unnoticeable manner. In this work, by examining the impact of adversarial attacks on graph data, we empirically find that the dominant edge-addition attacks generally increase the heterophily between connected nodes, which fools transductive inference models on the node classification task. To defend against such attacks, we develop a Two-Stage Denoising (TSD) method that removes possibly malicious edges so as to mitigate the heterophily introduced by attacks. In particular, after a coarse removal of links with very low feature similarity, our method further identifies potentially heterophilous links by predicting node labels with a multi-view labeling consensus. This design is based on the assumption that if the label predictions for the same node from two different views of the graph data are consistent, then the labeling is likely to be reliable. Experiments demonstrate that denoising a graph in this way remarkably improves the robustness of graph convolutional networks on the node classification task, compared to several strong robust graph neural network models.
KW - Adversarial Attacks
KW - Graph Convolutional Networks
KW - Node classification
KW - Robustness
UR - https://www.scopus.com/pages/publications/105009162196
U2 - 10.1145/3714428
DO - 10.1145/3714428
M3 - Article
AN - SCOPUS:105009162196
SN - 2157-6904
VL - 16
JO - ACM Transactions on Intelligent Systems and Technology
JF - ACM Transactions on Intelligent Systems and Technology
IS - 3
M1 - 53
ER -