TY - JOUR
T1 - EGAN: Encrypting GAN Models Based on Self-Adversarial
AU - Zhu, Yujie
AU - Li, Wei
AU - Jiang, Yuhang
AU - Huang, Yanrong
AU - Fang, Faming
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/12
Y1 - 2025/12
N2 - The increasing prevalence of deep learning models in industry has highlighted the critical need to protect the intellectual property (IP) of these models, especially generative adversarial networks (GANs) capable of synthesizing realistic data. Traditional IP protection methods, such as watermarking model parameters (white-box) or verifying outputs (black-box), are insufficient against non-public misappropriation. To address these limitations, we introduce EGAN (Encrypted GANs), which secures GAN models by embedding a novel self-adversarial mechanism. This mechanism is trained to actively maximize the feature divergence between authorized and unauthorized inputs, thereby intentionally corrupting the outputs from non-key inputs and preventing unauthorized operation. Our methodology utilizes key-based transformations applied to GAN inputs and incorporates a generator loss regularization term to enforce model protection without compromising performance. This technique is compatible with existing watermark-based verification methods. Extensive experimental evaluations reveal that EGAN maintains the generative capabilities of original GAN architectures, including DCGAN, SRGAN, and CycleGAN, while exhibiting robust resistance to common attack strategies such as fine-tuning. Compared with prior work, EGAN provides comprehensive IP protection by ensuring unauthorized users cannot achieve desired outcomes, thus safeguarding both the models and their generated data.
KW - data security
KW - encryption
KW - GAN
KW - intellectual property
KW - model protection
UR - https://www.scopus.com/pages/publications/105025824260
U2 - 10.3390/math13244008
DO - 10.3390/math13244008
M3 - Article
AN - SCOPUS:105025824260
SN - 2227-7390
VL - 13
JO - Mathematics
JF - Mathematics
IS - 24
M1 - 4008
ER -