Privacy Leakage in Privacy-Preserving Neural Network Inference

  • Mengqi Wei
  • Wenxing Zhu
  • Liangkun Cui
  • Xiangxue Li*
  • Qiang Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

The community has seen many attempts to secure machine learning algorithms with multi-party computation and other cryptographic primitives. An interesting 3-party framework (SCSDF hereafter) for privacy-preserving neural network inference was presented at ESORICS 2020. SCSDF defines several protocols for non-linear activation functions, including ReLU and Sigmoid. In particular, these protocols rely on a protocol DReLU (computing the derivative of the ReLU function) proposed as a building block. All protocols are claimed secure against one semi-honest corruption and against one malicious corruption. Unfortunately, this paper shows that there is serious leakage of private inputs during SCSDF executions, which completely breaks the security of the framework. We first give a detailed cryptanalysis of SCSDF from the perspective of the real-ideal simulation paradigm and show that the claimed-secure protocols do not meet the underlying security model. We then examine particular steps of SCSDF and demonstrate that the signs of the input data are inevitably revealed to the (semi-honest or malicious) third party that assists protocol executions. To exhibit the leakage more explicitly, we perform extensive experimental evaluations on the MNIST dataset, the CIFAR-10 dataset, and the Chicago Face Database (CFD), for both the ReLU and Sigmoid activation functions. All experiments succeed in disclosing the data owner's original private data during inference. Potential countermeasures are recommended and demonstrated as well.
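The core of the reported leakage is straightforward to illustrate: DReLU outputs the derivative of ReLU, which is exactly the sign information of its input. The following is a minimal sketch in Python of the sign-leakage principle only, not SCSDF's actual protocol; the toy image and its zero-centered normalization are assumptions made for this example. It shows that whoever observes DReLU outputs in the clear recovers a binary silhouette of a private image.

    # Illustrative sketch (not SCSDF protocol code): DReLU is the derivative
    # of ReLU, i.e. the indicator 1{x > 0}, so a party seeing DReLU outputs
    # in the clear learns the sign of every secret input.
    import numpy as np

    def drelu(x):
        # Derivative of ReLU: 1 where x > 0, 0 elsewhere.
        return (x > 0).astype(np.uint8)

    # Toy "private" image (an assumption for this example): a bright square
    # on a dark background, zero-centered as inference pipelines commonly
    # normalize their inputs.
    img = np.full((8, 8), -0.5)
    img[2:6, 2:6] = 0.5

    leaked_bits = drelu(img)  # what the assisting third party would observe
    print(leaked_bits)        # a binary silhouette of the private image

For zero-centered pixels, these leaked bits alone already outline the private input, which is consistent with the reconstruction experiments on MNIST, CIFAR-10, and CFD described in the abstract.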

Original language: English
Title of host publication: Computer Security – ESORICS 2022 - 27th European Symposium on Research in Computer Security, Proceedings
Editors: Vijayalakshmi Atluri, Roberto Di Pietro, Christian D. Jensen, Weizhi Meng
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 133-152
Number of pages: 20
ISBN (Print): 9783031171390
DOIs
State: Published - 2022
Event: 27th European Symposium on Research in Computer Security, ESORICS 2022 - Hybrid, Copenhagen, Denmark
Duration: 26 Sep 2022 – 30 Sep 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13554 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 27th European Symposium on Research in Computer Security, ESORICS 2022
Country/Territory: Denmark
City: Hybrid, Copenhagen
Period: 26/09/22 – 30/09/22

Keywords

  • Multi-party computation
  • Neural network inference
  • Privacy leakage
  • Privacy-preserving machine learning
