TY - JOUR
T1 - A Secure GNN Training Framework for Partially Observable Graph
AU - An, Dongdong
AU - Yang, Yi
AU - Liu, Wenyan
AU - Zhao, Qin
AU - Liu, Jing
AU - Qi, Hongda
AU - Lian, Jie
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/7
Y1 - 2024/7
AB - Graph Neural Networks (GNNs) are susceptible to adversarial injection attacks, which can compromise model integrity, reduce accuracy, and pose security risks. However, most current countermeasures focus on enhancing the robustness of GNNs rather than directly addressing these specific attacks. The challenge stems from the difficulty of protecting every node in the graph and from the defenders' lack of knowledge about the attackers. Therefore, we propose a secure training strategy for GNNs that counters the vulnerability to adversarial injection attacks and overcomes the obstacle of partial observability in existing defense mechanisms, where defenders are only aware of the graph's post-attack structure and node attributes and cannot identify the compromised nodes. Our strategy not only protects specific nodes but also extends security to all nodes in the graph. We model the graph security problem as a Partially Observable Markov Decision Process (POMDP), use Graph Convolutional Memory (GCM) to transform the POMDP's observations into states with temporal memory, and then apply reinforcement learning to solve for the optimal defensive strategy. Finally, we prevent learning from malicious nodes by limiting the convolutional scope, thus defending against adversarial injection attacks. Our defense method is evaluated on five datasets, achieving an accuracy range of 74% to 86.7%, an improvement of approximately 5.09% to 100.26% over post-attack accuracies. Compared with various traditional baseline models, our method shows an accuracy improvement ranging from 0.82% to 100.26%.
KW - Graph Neural Networks
KW - adversarial injection attack
KW - partial observability
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/85199657114
U2 - 10.3390/electronics13142721
DO - 10.3390/electronics13142721
M3 - Article
AN - SCOPUS:85199657114
SN - 2079-9292
VL - 13
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 14
M1 - 2721
ER -