Safe Reinforcement Learning for NN-controlled Systems with Neural Barrier Certificate Guidance

Hanrui Zhao, Mengxin Ren, Banglong Liu, Niuniu Qi, Xia Zeng, Zhenbing Zeng, Zhengfeng Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Safe controller synthesis is crucial for safety-critical applications. This paper presents a novel reinforcement learning approach to synthesizing safe controllers for NN-controlled systems. The core idea is an iterative scheme that combines controller learning with neural barrier certificate (BC) verification, ultimately producing a deep neural network (DNN) controller with formal safety guarantees. The process begins by pre-training a well-performing DNN controller as an “oracle” via deep reinforcement learning (DRL). To formally verify the safety of the closed-loop system under this base controller, we devise a verification procedure that approximates the DNN controller by polynomial inclusion and then synthesizes neural BCs via sum-of-squares (SOS) relaxation. When the base controller does not admit a real BC, the current spurious BC is incorporated as an additional penalty term that reshapes the RL reward function, guiding the iterative refinement of new controllers. We implement an automated tool, NBCRL, and experimental results demonstrate the efficiency and scalability of our method, even for a nonlinear system of dimension up to 12.
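The iterative scheme in the abstract — learn a controller, attempt to certify it with a barrier candidate, and on failure fold the spurious candidate back into the reward as a safety penalty — can be sketched on a toy scalar system. Everything below (the dynamics, the quadratic barrier candidate, the finite-difference policy update, and all names) is an illustrative assumption for exposition, not the paper's actual NBCRL implementation, which uses DNN controllers, neural BCs, and SOS-based verification.

```python
# Hedged sketch of the learn/verify/reshape loop from the abstract.
# Toy system, quadratic barrier stand-in, and crude gradient step are
# all illustrative assumptions, not the paper's method.
import numpy as np

def dynamics(x, u, dt=0.05):
    """Toy unstable scalar system: x_{t+1} = x + dt * (x + u)."""
    return x + dt * (x + u)

def barrier(x, c):
    """Candidate barrier B(x) = c - x^2 (stand-in for a neural BC);
    B(x) >= 0 marks the region claimed safe."""
    return c - x**2

def reshaped_reward(x, base_reward, c, lam=10.0):
    """Penalize violations of the current (possibly spurious) barrier
    candidate, mirroring the abstract's reward reshaping."""
    violation = max(0.0, -barrier(x, c))
    return base_reward - lam * violation

def train_step(k, c, starts, lr=0.01):
    """One crude improvement step for a linear controller u = -k*x:
    nudge the gain toward higher reshaped return (finite differences)."""
    def avg_return(gain):
        total = 0.0
        for x0 in starts:
            x = x0
            for _ in range(20):
                x = dynamics(x, -gain * x)
                total += reshaped_reward(x, -x**2, c)
        return total / len(starts)
    grad = (avg_return(k + 1e-3) - avg_return(k - 1e-3)) / 2e-3
    return k + lr * np.sign(grad)

# Iterate: refine the controller under the barrier-shaped reward.
k, c = 0.0, 1.0               # initial gain; barrier level set |x| <= 1
starts = [-0.5, 0.2, 0.8]
for _ in range(200):
    k = train_step(k, c, starts)

# With a large enough gain the closed loop contracts, so one-step
# updates from the safe set do not grow in magnitude.
safe = all(abs(dynamics(x0, -k * x0)) <= abs(x0) for x0 in starts)
```

In the paper's actual pipeline the verification step is formal (polynomial inclusion of the DNN controller plus SOS relaxation), whereas this sketch only checks contraction empirically; the sketch is meant to convey the control flow of the loop, not the certificate-synthesis machinery.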

Keywords

  • Continuous Dynamical Systems
  • Counterexample Guidance
  • Formal Verification
  • Neural Barrier Certificate
  • Safe Reinforcement Learning

