TY - GEN
T1 - QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks
T2 - 37th IEEE/ACM International Conference on Automated Software Engineering, ASE 2022
AU - Zhang, Yedi
AU - Zhao, Zhe
AU - Chen, Guangke
AU - Song, Fu
AU - Zhang, Min
AU - Chen, Taolue
AU - Sun, Jun
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/9/19
Y1 - 2022/9/19
AB - Deep learning has become a promising programming paradigm in software development, owing to its impressive performance in solving many challenging tasks. Deep neural networks (DNNs) are increasingly being deployed in practice, but their deployment on resource-constrained devices is limited by their demand for computational power. Quantization has emerged as a promising technique to reduce the size of DNNs while retaining accuracy comparable to that of their floating-point counterparts. The resulting quantized neural networks (QNNs) can be implemented energy-efficiently. As with their floating-point counterparts, quality assurance techniques for QNNs, such as testing and formal verification, are essential but currently underexplored. In this work, we propose a novel and efficient formal verification approach for QNNs. In particular, we are the first to propose an encoding that reduces the verification problem of QNNs to the solving of integer linear constraints, which can be handled by off-the-shelf solvers. Our encoding is both sound and complete. We demonstrate the application of our approach to local robustness verification and maximum robustness radius computation. We implement our approach in a prototype tool QVIP and conduct a thorough evaluation. Experimental results on QNNs with different quantization bit widths confirm the effectiveness and efficiency of our approach: it is, e.g., two orders of magnitude faster than state-of-the-art methods and able to solve more verification tasks within the same time limit.
KW - Quantized neural network
KW - formal verification
KW - integer linear programming
KW - robustness
UR - https://www.scopus.com/pages/publications/85144823147
U2 - 10.1145/3551349.3556916
DO - 10.1145/3551349.3556916
M3 - Conference contribution
AN - SCOPUS:85144823147
T3 - ACM International Conference Proceeding Series
BT - 37th IEEE/ACM International Conference on Automated Software Engineering, ASE 2022
A2 - Aehnelt, Mario
A2 - Kirste, Thomas
PB - Association for Computing Machinery
Y2 - 10 October 2022 through 14 October 2022
ER -