TY - JOUR
T1 - Nimbus: Secure and Efficient Two-Party Inference for Transformers
T2 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024
AU - Li, Zhengyi
AU - Yang, Kang
AU - Tan, Jin
AU - Lu, Wen Jie
AU - Wu, Haoqi
AU - Wang, Xiao
AU - Yu, Yu
AU - Zhao, Derun
AU - Zheng, Yancheng
AU - Guo, Minyi
AU - Leng, Jingwen
N1 - Publisher Copyright:
© 2024 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2024
Y1 - 2024
AB - Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. When applied to Transformers, however, existing approaches based on secure two-party computation (2PC) face two efficiency bottlenecks: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions such as GELU and Softmax. This work presents Nimbus, a new two-party inference framework for Transformer models. For the linear layers, we propose a new 2PC paradigm along with an encoding approach that securely computes matrix multiplications based on an outer-product insight, achieving 2.9× ∼ 12.5× performance improvements over the state-of-the-art (SOTA) protocol. For the non-linear layers, by exploiting a new observation about the input distribution, we propose a low-degree polynomial approximation for GELU and Softmax that outperforms the SOTA polynomial approximation by 2.9× ∼ 4.0×, with an average accuracy loss of only 0.08% relative to non-2PC inference without privacy. Compared with the SOTA two-party inference, Nimbus improves the end-to-end performance of BERT-base inference by 2.7× ∼ 4.7× across different network settings.
UR - https://www.scopus.com/pages/publications/105000537285
M3 - Conference article
AN - SCOPUS:105000537285
SN - 1049-5258
VL - 37
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 9 December 2024 through 15 December 2024
ER -