Safeguarding Graph Neural Networks against Topology Inference Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy: the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code and full paper are available at https://github.com/JeffffffFu/PGR.
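The alternating scheme described in the abstract can be sketched in miniature. The toy below is hypothetical and is not the authors' implementation (see the linked repository for that): it stands in a linear "GNN" f(A, X) = A·X·W for a real model and plain analytic gradients for the paper's meta-gradients, but it preserves the bi-level shape — one step updates the model weights on the current synthetic graph, the next step updates the synthetic adjacency itself.

```python
import numpy as np

# Hypothetical sketch of PGR's alternating bi-level updates (not the
# authors' code): a linear "GNN" Z = A @ X @ W trained while the
# synthetic adjacency A_syn is itself optimized.
rng = np.random.default_rng(0)
n, d, c = 8, 4, 2                       # nodes, feature dim, output dim
X = rng.normal(size=(n, d))             # node features
Y = rng.normal(size=(n, c))             # training targets
A_syn = rng.uniform(size=(n, n))        # synthetic adjacency (learnable)
W = rng.normal(size=(d, c)) * 0.1       # model weights (learnable)

def loss(A, W):
    """Mean squared error of the linear GNN's predictions."""
    return ((A @ X @ W - Y) ** 2).mean()

lr = 0.02
initial = loss(A_syn, W)
for _ in range(300):
    R = (A_syn @ X @ W - Y) * (2.0 / (n * c))  # dLoss/dZ
    W -= lr * (A_syn @ X).T @ R                # inner step: update model
    A_syn -= lr * R @ (X @ W).T                # outer step: update graph
    A_syn = np.clip(A_syn, 0.0, 1.0)           # keep edge weights in [0, 1]
final = loss(A_syn, W)
```

Because the synthetic graph, rather than the private training graph, is what the model ultimately fits, a black-box attacker probing the trained model recovers structure from A_syn instead of the real topology; the real PGR replaces the analytic gradients above with meta-gradients through the GNN's training procedure.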

Original language: English
Title of host publication: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 2144-2158
Number of pages: 15
ISBN (Electronic): 9798400715259
DOIs
State: Published - 22 Nov 2025
Event: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025 - Taipei, Taiwan, Province of China
Duration: 13 Oct 2025 - 17 Oct 2025

Publication series

Name: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security

Conference

Conference: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 13/10/25 - 17/10/25

Keywords

  • Graph Neural Network (GNN)
  • Topology privacy
  • Privacy and security in machine learning
  • Privacy inference attacks
