A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation

  • Zhiyi Xue
  • Si Liu
  • Zhaodi Zhang
  • Yiting Wu
  • Min Zhang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

The robustness of deep neural networks (DNNs) is crucial to the reliability and security of the hosting system. Formal verification has been demonstrated to be effective in providing provable robustness guarantees. To improve its scalability, over-approximating the non-linear activation functions in DNNs by linear constraints has been widely adopted, which transforms the verification problem into an efficiently solvable linear programming problem. Many efforts have been dedicated to defining the so-called tightest approximations to reduce the overestimation imposed by over-approximation. In this paper, we study existing approaches and identify a dominant factor in defining tight approximations, namely the approximation domain of the activation function. We find that tight approximations defined on approximation domains may not be as tight as those defined on their actual domains, yet existing approaches all rely only on approximation domains. Based on this observation, we propose a novel dual-approximation approach to tighten over-approximations, leveraging an activation function's underestimated domain to define tight approximation bounds. We implement our approach, with two complementary algorithms based respectively on Monte Carlo simulation and gradient descent, in a tool called DualApp. We assess it on a comprehensive benchmark of DNNs with different architectures. Our experimental results show that DualApp significantly outperforms the state-of-the-art approaches, with a 100%-1000% improvement in the verified robustness ratio and an average 10.64% improvement (up to 66.53%) in the certified lower bound.
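To make the abstract's core idea concrete, the following is a minimal illustrative sketch (not the paper's DualApp implementation; the intervals and the sigmoid activation are assumptions chosen for illustration). It builds sound linear bounds for sigmoid on an interval where the function is convex, using the chord as the upper bound and the tangent at the midpoint as the lower bound, and then checks that bounds defined on the actual domain are tighter than bounds defined on a looser, overestimated domain:

```python
# Hedged sketch: linear over-approximation of sigmoid on an interval where
# it is convex (x <= 0). The chord is a sound upper bound and the tangent
# at the midpoint is a sound lower bound on such an interval. The intervals
# below are illustrative assumptions, not values from the paper.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Return (lower, upper) bounds as (slope, intercept) pairs, sound on
    [l, u] when sigmoid is convex there (l <= u <= 0)."""
    k_up = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord: upper bound
    b_up = sigmoid(l) - k_up * l
    m = (l + u) / 2.0
    k_lo = sigmoid_prime(m)                      # midpoint tangent: lower bound
    b_lo = sigmoid(m) - k_lo * m
    return (k_lo, b_lo), (k_up, b_up)

def max_gap(bounds, l, u, n=1000):
    """Largest vertical distance between the upper and lower bound on [l, u];
    smaller gap means a tighter approximation."""
    (k_lo, b_lo), (k_up, b_up) = bounds
    return max((k_up - k_lo) * x + (b_up - b_lo)
               for x in (l + i * (u - l) / n for i in range(n + 1)))

# The paper's observation in miniature: bounds defined on the actual domain
# [-3, -1] are tighter there than bounds defined on an overestimated
# approximation domain [-5, 0].
tight = linear_bounds(-3.0, -1.0)
loose = linear_bounds(-5.0, 0.0)
print(max_gap(tight, -3.0, -1.0) < max_gap(loose, -3.0, -1.0))  # True
```

This is why under-approximating the actual domain, as DualApp does via Monte Carlo simulation or gradient descent, can yield tighter linear bounds than working with the overestimated approximation domain alone.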

Original language: English
Title of host publication: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
Editors: Rene Just, Gordon Fraser
Publisher: Association for Computing Machinery, Inc
Pages: 1182-1194
Number of pages: 13
ISBN (Electronic): 9798400702211
DOIs
State: Published - 12 Jul 2023
Event: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023 - Seattle, United States
Duration: 17 Jul 2023 - 21 Jul 2023

Publication series

Name: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis

Conference

Conference: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023
Country/Territory: United States
City: Seattle
Period: 17/07/23 - 21/07/23

Keywords

  • Deep neural network
  • over-approximation
  • robustness verification
