TY - GEN
T1 - Fine-grained neural network abstraction for efficient formal verification
AU - Wen, Zhaosen
AU - Miao, Weikai
AU - Zhang, Min
N1 - Publisher Copyright:
© 2021 Knowledge Systems Institute Graduate School. All rights reserved.
PY - 2021
Y1 - 2021
N2 - The advance of deep learning makes it possible to empower safety-critical systems with intelligent capabilities. However, its intelligent component, i.e., the deep neural network, is difficult to formally verify due to the large scale and intrinsic complexity of the verification problem. Abstraction has proven to be an effective way of improving scalability. A key challenge in abstraction is balancing the size reduction it achieves against the output overestimation it introduces. In this work, we propose an effective fine-grained approach to abstracting neural networks. Our approach is fine-grained in that we identify four cases that should be abstracted independently under a certain neuron prioritization strategy. This allows us to merge more neurons in networks while maintaining a relatively low output overestimation. Experimental results show that our approach outperforms other existing abstraction approaches by significantly reducing the scale of target deep neural networks with small overestimation.
AB - The advance of deep learning makes it possible to empower safety-critical systems with intelligent capabilities. However, its intelligent component, i.e., the deep neural network, is difficult to formally verify due to the large scale and intrinsic complexity of the verification problem. Abstraction has proven to be an effective way of improving scalability. A key challenge in abstraction is balancing the size reduction it achieves against the output overestimation it introduces. In this work, we propose an effective fine-grained approach to abstracting neural networks. Our approach is fine-grained in that we identify four cases that should be abstracted independently under a certain neuron prioritization strategy. This allows us to merge more neurons in networks while maintaining a relatively low output overestimation. Experimental results show that our approach outperforms other existing abstraction approaches by significantly reducing the scale of target deep neural networks with small overestimation.
UR - https://www.scopus.com/pages/publications/85114279723
U2 - 10.18293/SEKE2021-071
DO - 10.18293/SEKE2021-071
M3 - Conference contribution
AN - SCOPUS:85114279723
T3 - Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE
SP - 144
EP - 149
BT - Proceedings - SEKE 2021
PB - Knowledge Systems Institute Graduate School
T2 - 33rd International Conference on Software Engineering and Knowledge Engineering, SEKE 2021
Y2 - 1 July 2021 through 10 July 2021
ER -