TY - JOUR
T1 - On Probabilistic Truncation in Privacy-preserving Machine Learning
AU - Zhou, Lijing
AU - Zhang, Bingsheng
AU - Wang, Ziyu
AU - Lu, Tianpei
AU - Song, Qingrui
AU - Zhang, Su
AU - Cui, Hongrui
AU - Yu, Yu
N1 - Publisher Copyright:
Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
N2 - Probabilistic truncation is widely used in privacy-preserving machine learning (PPML) platforms such as EdaBits (Crypto 20), ABY 2.0 (Usenix 21), Crypten (NIPS 21), Piranha-Falcon (Usenix 22), and Bicoptor (S&P 23). In this work, we examine the problems of common probabilistic truncation protocols in PPML and propose solutions from the perspectives of accuracy and efficiency. Regarding accuracy, we find that the recommended precision parameters in many existing works are incorrect, leading to extremely low inference accuracy. A thorough analysis of their open-source code shows that these errors are mainly caused by simplified implementations; specifically, random numbers are not sampled correctly in the probabilistic truncation protocols. We provide a detailed theoretical analysis to validate this finding. Regarding efficiency, we identify limitations in the state-of-the-art secure comparison protocol, Bicoptor's (S&P 23) DReLU, which relies on probabilistic truncation and must be heavily constrained by the security parameter to eliminate errors, significantly degrading its performance. To address these challenges, we introduce a non-interactive deterministic truncation technique that replaces the original probabilistic truncation. We also propose a new technique for speeding up ReLU/DReLU evaluation, which applies to other non-linear functions as well. When the input size of DReLU is reduced to 7 bits, we achieve an approximately 5x speedup in ReLU protocols relative to ABY3, ABY 2.0, EdaBits, and Bicoptor without compromising model accuracy. The improved protocol completes a ReLU evaluation in 2 rounds with 704 bits of overall communication when the input/output is secret-shared over the 64-bit ring, a 92% communication reduction compared to the original Bicoptor.
Compared to existing PPML platforms with GPU acceleration, our benchmarks show a 10x improvement in the DReLU protocol, a 6x improvement in the ReLU protocol over Piranha-Falcon, and a 3.7x improvement over Bicoptor. As a result, overall PPML model inference can be sped up by 3-4x.
UR - https://www.scopus.com/pages/publications/105004003661
U2 - 10.1609/aaai.v39i21.34458
DO - 10.1609/aaai.v39i21.34458
M3 - Conference article
AN - SCOPUS:105004003661
SN - 2159-5399
VL - 39
SP - 22955
EP - 22964
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 21
T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
Y2 - 25 February 2025 through 4 March 2025
ER -