Safeguarding Neural Network-Controlled Systems via Formal Methods: From Safety-by-Design to Runtime Assurance (Invited Talk)

  • Min Zhang*
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Neural networks (NNs) exhibit remarkable potential in decision-making and control systems. While NNs can be trained with sophisticated Deep Reinforcement Learning (DRL) techniques to achieve optimal system performance under various constraints, a significant concern persists: the trained decision-making models lack provable safety guarantees. The intrinsic complexity and opacity of these models make it profoundly challenging to rigorously guarantee their safety in various hosting environments, including the systems they control. Drawing on our experience, we contend that formal methods are crucial for developing neural network controllers that are not only robust but also certifiable, thereby ensuring system safety from training through deployment. We demonstrate that integrating formal methods into the learning process is essential to providing a comprehensive safety guarantee for the controlled systems across their entire design, training, and execution lifecycle.

Original language: English
Title of host publication: Theoretical Aspects of Software Engineering - 19th International Symposium, TASE 2025, Proceedings
Editors: Philipp Rümmer, Zhilin Wu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 3-10
Number of pages: 8
ISBN (Print): 9783031982071
State: Published - 2026
Event: 19th International Symposium on Theoretical Aspects of Software Engineering, TASE 2025 - Limassol, Cyprus
Duration: 14 Jul 2025 - 16 Jul 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15841 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th International Symposium on Theoretical Aspects of Software Engineering, TASE 2025
Country/Territory: Cyprus
City: Limassol
Period: 14/07/25 - 16/07/25
