DPQ: dynamic pseudo-mean mixed-precision quantization for pruned neural network

Songwen Pei*, Jiyao Wang, Bingxue Zhang, Wei Qin, Hai Xue, Xiaochun Ye, Mingsong Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The layers and hyper-parameters of deep neural networks keep growing as large-scale networks are trained on massive amounts of data, which makes such networks difficult to deploy on resource-constrained edge devices. Mixed-precision quantization can prune and compress deep neural network models while searching for the optimal bit width of each layer, but doing so remains challenging. To address this challenge, we propose dynamic pseudo-mean mixed-precision quantization (DPQ), which introduces two-bit scaling factors to compensate for quantization errors. We further propose an activation quantization scheme named random parameters clipping (RPC), which quantizes activations only partially to reduce the loss of accuracy. DPQ can therefore dynamically adjust the bit precision of weight quantization according to the distribution of the weights, yielding a quantization scheme that is more robust than previous methods. Extensive experiments demonstrate that DPQ achieves a 15.43× compression rate for ResNet20 on the CIFAR-10 dataset with a 0.22% increase in accuracy, and a 35.25× compression rate for ResNet56 on the SVHN dataset with a 0.12% increase in accuracy.
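The abstract describes the method only at a high level, so the Python sketch below is a rough, non-authoritative illustration of the general idea: per-layer symmetric weight quantization with a per-layer scaling factor and a bit width chosen from the weight distribution. It is not the paper's actual DPQ or RPC procedure; `choose_bitwidth`, `quantize_weights`, and the spread heuristic are hypothetical placeholders.

```python
# Illustrative sketch only: generic per-layer mixed-precision weight
# quantization. NOT the paper's DPQ algorithm; the bit-width heuristic
# and function names are assumptions for demonstration.
import numpy as np

def choose_bitwidth(weights: np.ndarray, low: int = 2, high: int = 8) -> int:
    """Pick a per-layer bit width from the weight distribution.

    Placeholder heuristic: distributions with a larger spread relative to
    their mean magnitude get more bits. The paper's dynamic pseudo-mean
    criterion is not reproduced here.
    """
    spread = weights.std() / (np.abs(weights).mean() + 1e-12)
    return int(np.clip(round(low + spread), low, high))

def quantize_weights(weights: np.ndarray, bits: int):
    """Symmetric uniform quantization with a per-layer scaling factor."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 127 for 8 bits
    scale = np.abs(weights).max() / qmax + 1e-12     # per-layer scale
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int32), scale

# Usage: quantize one layer and measure the residual reconstruction error,
# i.e. the kind of error a compensation scheme like DPQ's scaling factors
# would aim to reduce.
w = np.random.randn(256, 256).astype(np.float32) * 0.05
bits = choose_bitwidth(w)
q, scale = quantize_weights(w, bits)
err = np.abs(w - q * scale).mean()
print(f"bits={bits}, mean |error|={err:.6f}")
```

The printed residual error makes concrete what per-layer error compensation targets; in a full mixed-precision pipeline this per-layer choice would be repeated across all layers of the pruned network.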

Original language: English
Pages (from-to): 4099-4112
Number of pages: 14
Journal: Machine Learning
Volume: 113
Issue number: 7
DOI
Publication status: Published - Jul 2024
