TY - GEN
T1 - Safeguarding Graph Neural Networks against Topology Inference Attacks
AU - Fu, Jie
AU - Hong, Yuan
AU - Chen, Zhili
AU - Wang, Wendy Hui
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/22
Y1 - 2025/11/22
AB - Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy: the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code and full paper are available at https://github.com/JeffffffFu/PGR.
KW - Graph Neural Network (GNN)
KW - Topology privacy
KW - privacy and security in machine learning
KW - privacy inference attacks
UR - https://www.scopus.com/pages/publications/105023823908
U2 - 10.1145/3719027.3765173
DO - 10.1145/3719027.3765173
M3 - Conference contribution
AN - SCOPUS:105023823908
T3 - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
SP - 2144
EP - 2158
BT - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
T2 - 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
Y2 - 13 October 2025 through 17 October 2025
ER -