Robustness Verification of Deep Reinforcement Learning Based Control Systems Using Reward Martingales

Research output: Contribution to journal › Conference article › peer-review

4 Scopus citations

Abstract

Deep Reinforcement Learning (DRL) has gained prominence as an effective approach for control systems. However, its practical deployment is impeded by state perturbations that can severely degrade system performance. Addressing this critical challenge requires robustness verification of system performance, which involves tackling two quantitative questions: (i) how to establish guaranteed bounds for expected cumulative rewards, and (ii) how to determine tail bounds for cumulative rewards. In this work, we present the first approach for robustness verification of DRL-based control systems by introducing reward martingales, which offer a rigorous mathematical foundation for characterizing the impact of state perturbations on system performance in terms of cumulative rewards. Our verified results provide provably quantitative certificates for the two questions. We then show that reward martingales can be implemented and trained via neural networks, for different types of control policies. Experimental results demonstrate that our certified bounds tightly enclose simulation outcomes on various DRL-based control systems, indicating the effectiveness and generality of the proposed approach.
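To illustrate the flavor of the certificates the abstract describes, the following is a generic supermartingale-style condition of the kind commonly used to bound expected cumulative rewards; it is a hedged sketch of the standard technique, not necessarily the authors' exact formulation. Here $V$ is a candidate certificate function (e.g., a trained neural network), $R$ a bounded step reward, and $s'$ the perturbed successor state.

```latex
% Generic (illustrative) upper reward supermartingale condition:
% if, for every state s reachable under the perturbed dynamics,
%   E_{s'}[ V(s') ] <= V(s) - R(s),
% with V nonnegative, then by iterating the inequality and taking
% expectations one obtains an upper bound on the expected
% cumulative reward from any initial state s_0:
%   E[ \sum_{t=0}^{\infty} R(s_t) ] <= V(s_0).
\mathbb{E}_{s'}\big[V(s')\big] \;\le\; V(s) - R(s)
\quad\Longrightarrow\quad
\mathbb{E}\Big[\textstyle\sum_{t \ge 0} R(s_t)\Big] \;\le\; V(s_0).
```

Tail bounds on cumulative rewards (question (ii) in the abstract) are typically obtained from such martingale conditions via concentration inequalities, e.g., Azuma–Hoeffding, applied to the associated martingale differences.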

Original language: English
Pages (from-to): 19992-20000
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 18
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 - 27 Feb 2024
