TY - GEN
T1 - Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
AU - Tian, Jiaxu
AU - Zhi, Dapeng
AU - Liu, Si
AU - Wang, Peixin
AU - Katz, Guy
AU - Zhang, Min
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2024
Y1 - 2024
N2 - The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the hosting DNN-controlled systems. Reachability analysis of these systems faces the same challenge. Existing approaches rely on over-approximating DNNs using simpler polynomial models. However, they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted abstraction layer ensures that the values represented by an interval are indistinguishable to the network for both training and decision-making. Leveraging this, we devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to any DNN type and size. The experimental results on a wide range of benchmarks show that the DNNs trained by using our approach exhibit comparable performance, while the reachability analysis of the corresponding systems becomes more amenable with significant tightness and efficiency improvement over the state-of-the-art white-box approaches.
AB - The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the hosting DNN-controlled systems. Reachability analysis of these systems faces the same challenge. Existing approaches rely on over-approximating DNNs using simpler polynomial models. However, they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted abstraction layer ensures that the values represented by an interval are indistinguishable to the network for both training and decision-making. Leveraging this, we devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to any DNN type and size. The experimental results on a wide range of benchmarks show that the DNNs trained by using our approach exhibit comparable performance, while the reachability analysis of the corresponding systems becomes more amenable with significant tightness and efficiency improvement over the state-of-the-art white-box approaches.
UR - https://www.scopus.com/pages/publications/85181977551
U2 - 10.1007/978-3-031-50521-8_4
DO - 10.1007/978-3-031-50521-8_4
M3 - Conference contribution
AN - SCOPUS:85181977551
SN - 9783031505201
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 73
EP - 97
BT - Verification, Model Checking, and Abstract Interpretation - 25th International Conference, VMCAI 2024, Proceedings
A2 - Dimitrova, Rayna
A2 - Lahav, Ori
A2 - Wolff, Sebastian
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2024, co-located with the 51st ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2024
Y2 - 15 January 2024 through 16 January 2024
ER -