An Efficient Private GPT Never Autoregressively Decodes

  • Zhengyi Li
  • Yue Guan
  • Kang Yang*
  • Yu Feng
  • Ning Liu
  • Yu Yu*
  • Jingwen Leng*
  • Minyi Guo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

The wide deployment of the generative pre-trained transformer (GPT) has raised privacy concerns for both clients and servers. While cryptographic primitives can be employed for secure GPT inference to protect the privacy of both parties, they introduce considerable performance overhead. To accelerate secure inference, this study proposes a public decoding and secure verification approach that utilizes public GPT models, motivated by the observation that securely decoding one token and securely decoding multiple tokens incur similar latency. The client uses the public model to generate a set of tokens, which are then securely verified by the private model for acceptance. The efficiency of our approach depends on the acceptance ratio of tokens proposed by the public model, which we improve in two ways: (1) a private sampling protocol optimized for cryptographic primitives and (2) model alignment using knowledge distillation. Our approach improves the efficiency of secure decoding while maintaining the same level of privacy and generation quality as standard secure decoding. Experiments demonstrate a 2.1× ∼ 6.0× speedup compared to standard decoding across three pairs of public-private models and different network conditions.
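The draft-then-verify pattern described in the abstract can be illustrated with a toy sketch. This is not the paper's cryptographic protocol (the real verification runs the private model under secure-computation primitives); the `public_draft` and `private_verify` functions below are hypothetical stand-ins using simple arithmetic rules in place of actual GPT models. The key structural point survives: the private model checks a whole batch of drafted tokens in one pass, accepting the longest matching prefix plus one corrected token, so the output always matches what pure private greedy decoding would produce.

```python
def public_draft(prefix, k):
    """Hypothetical public (draft) model: proposes k tokens cheaply.
    A real system would run a local public GPT; here, a toy +1 rule."""
    out = list(prefix)
    for _ in range(k):
        out.append((out[-1] + 1) % 50)
    return out[len(prefix):]

def private_verify(prefix, drafted):
    """Hypothetical private (target) model, run under MPC in the paper.
    Scores all drafted positions in one batched pass -- the reason
    verifying k tokens costs about the same as decoding one."""
    accepted, ctx = [], list(prefix)
    for t in drafted:
        target = (ctx[-1] * 2 + 1) % 50  # toy greedy next-token rule
        if t == target:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(target)  # emit the correction, reject the rest
            break
    return accepted

def decode(prefix, n_tokens, k=4):
    """Public-decoding / secure-verification loop: never purely
    autoregressive on the private model."""
    out = list(prefix)
    while len(out) < len(prefix) + n_tokens:
        drafted = public_draft(out, k)
        out.extend(private_verify(out, drafted))
    return out[:len(prefix) + n_tokens]
```

Because `private_verify` always appends the private model's own token at the first mismatch, the final sequence is identical to standard (autoregressive) private greedy decoding; only the number of expensive private rounds changes, governed by the acceptance ratio the paper optimizes.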

Original language: English
Pages (from-to): 34410-34428
Number of pages: 19
Journal: Proceedings of Machine Learning Research
Volume: 267
State: Published - 2025
Externally published: Yes
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 – 19 Jul 2025
