TY - GEN
T1 - Verify Deep Learning Models Ownership via Preset Embedding
AU - Yin, Wenxuan
AU - Qian, Haifeng
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - A well-trained deep neural network (DNN) requires massive computing resources and data; it therefore belongs to the model owner's Intellectual Property (IP). Recent works have shown that a model can be stolen by an adversary without access to any training data or internal parameters of the model. Current defense methods resist such attacks by increasing the cost of model stealing or by detecting the theft afterwards. In this paper, we propose a method to determine theft by detecting whether the victim's preset embedding exists in the adversary's model. First, we convert some training images into grayscale images as the embedding and inject them into the training set. Then, we train a binary classifier to determine whether a model was stolen from the victim. The main intuition behind our approach is that a stolen model should contain the embedded knowledge learned by the victim model. Our results demonstrate that our method is effective in defending against different types of model theft.
AB - A well-trained deep neural network (DNN) requires massive computing resources and data; it therefore belongs to the model owner's Intellectual Property (IP). Recent works have shown that a model can be stolen by an adversary without access to any training data or internal parameters of the model. Current defense methods resist such attacks by increasing the cost of model stealing or by detecting the theft afterwards. In this paper, we propose a method to determine theft by detecting whether the victim's preset embedding exists in the adversary's model. First, we convert some training images into grayscale images as the embedding and inject them into the training set. Then, we train a binary classifier to determine whether a model was stolen from the victim. The main intuition behind our approach is that a stolen model should contain the embedded knowledge learned by the victim model. Our results demonstrate that our method is effective in defending against different types of model theft.
KW - deep learning
KW - model stealing
KW - ownership verification
UR - https://www.scopus.com/pages/publications/85168097167
U2 - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00173
DO - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00173
M3 - Conference contribution
AN - SCOPUS:85168097167
T3 - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
SP - 1113
EP - 1118
BT - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE SmartWorld, 19th IEEE International Conference on Ubiquitous Intelligence and Computing, 2022 IEEE International Conference on Autonomous and Trusted Vehicles Conference, 22nd IEEE International Conference on Scalable Computing and Communications, 2022 IEEE International Conference on Digital Twin, 8th IEEE International Conference on Privacy Computing and 2022 IEEE International Conference on Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
Y2 - 15 December 2022 through 18 December 2022
ER -