A Method to Verify Neural Network Decoders Against Adversarial Attacks

  • Kaijie Shen
  • Chengju Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this letter, we study the robustness of deep neural networks (DNNs) used for channel decoding when they are confronted with adversarial attacks. Leveraging interval analysis, we verify the robustness of these DNNs against adversarial perturbations within a specified power range. We demonstrate that a verified upper bound can serve as an effective metric for quantifying a neural network's defense capability against such attacks. This verification can help assess the security of wireless communication systems that rely on deep learning algorithms.
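The abstract does not give implementation details, so the following is only a minimal sketch of the general interval-analysis idea it invokes: propagating a bounded input interval through a network (here via interval bound propagation over affine layers and ReLUs) to certify that the output decision cannot change. The layer shapes, random weights, and the `epsilon` perturbation bound are illustrative assumptions, not the paper's actual decoder or method.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Interval arithmetic for y = W @ x + b: split W into its
    # positive and negative parts so each output bound pairs with
    # the correct input bound.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_interval(x, epsilon, layers):
    """Propagate the box [x - epsilon, x + epsilon] through a list of
    (W, b) layers with ReLU between them; return output bounds."""
    lo, hi = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    return lo, hi

# Toy two-layer "decoder" with random weights (illustrative only).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(2, 8)), rng.normal(size=2))]
x = rng.normal(size=4)
lo, hi = output_interval(x, 0.01, layers)
# If lo[k] > hi[j] for every j != k, the decision k is provably
# unchanged by any perturbation inside the interval, which is the
# kind of verified guarantee the letter's metric builds on.
```

Because every operation over-approximates the true reachable set, a certificate obtained this way is sound (no adversarial example within the bound exists) but possibly conservative.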

Original language: English
Pages (from-to): 843-847
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 29
Issue number: 4
State: Published - 2025

Keywords

  • Adversarial attacks
  • deep learning
  • interval analysis
  • neural network decoder

