TY - JOUR
T1 - Lightweight Privacy-Preserving Training and Evaluation for Discretized Neural Networks
AU - Chen, Jialu
AU - Zhou, Jun
AU - Cao, Zhenfu
AU - Vasilakos, Athanasios V.
AU - Dong, Xiaolei
AU - Choo, Kim-Kwang Raymond
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - Machine learning, particularly neural networks (NNs), is widely deployed across a broad range of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To preserve privacy, most existing work relies either on public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or on secure multiparty computation (SMC), which requires multiple rounds of interaction between user and cloud. To address these issues, this article proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized NNs (DiNNs). First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. Based on SFH-DEM, a series of atomic operations over the encrypted domain, including multivariate polynomial evaluation, nonlinear activation functions, gradient computation, and maximum operations, are devised as building blocks. Building on these, LPTE supports privacy-preserving model training and evaluation for DiNNs and can also be extended to convolutional NNs. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and we implement experiments on the real-world MNIST handwritten digit dataset in a DiNN to demonstrate the high efficiency and accuracy of the proposed LPTE.
AB - Machine learning, particularly neural networks (NNs), is widely deployed across a broad range of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To preserve privacy, most existing work relies either on public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or on secure multiparty computation (SMC), which requires multiple rounds of interaction between user and cloud. To address these issues, this article proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized NNs (DiNNs). First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. Based on SFH-DEM, a series of atomic operations over the encrypted domain, including multivariate polynomial evaluation, nonlinear activation functions, gradient computation, and maximum operations, are devised as building blocks. Building on these, LPTE supports privacy-preserving model training and evaluation for DiNNs and can also be extended to convolutional NNs. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and we implement experiments on the real-world MNIST handwritten digit dataset in a DiNN to demonstrate the high efficiency and accuracy of the proposed LPTE.
KW - Discretized neural networks (NNs)
KW - efficiency
KW - privacy-preserving
KW - secure outsourced computation
UR - https://www.scopus.com/pages/publications/85083744687
U2 - 10.1109/JIOT.2019.2942165
DO - 10.1109/JIOT.2019.2942165
M3 - Article
AN - SCOPUS:85083744687
SN - 2327-4662
VL - 7
SP - 2663
EP - 2678
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 4
M1 - 8843956
ER -