Abstract
In this letter, we study the robustness of deep neural networks (DNNs) used for channel decoding when they are confronted with adversarial attacks. Leveraging interval analysis, we verify the robustness of these DNNs against adversarial perturbations within a given power range. We show that the verified upper bound serves as an effective metric for quantifying a network's defense capability against such attacks. This verification is useful for assessing the security of wireless communication systems that rely on deep learning algorithms.
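The letter's exact verification procedure is not reproduced here, but the core idea of interval analysis can be sketched: an L-infinity perturbation budget on the decoder's input is propagated layer by layer as an interval, yielding guaranteed output bounds. The minimal example below (toy network, random weights, and the `eps` budget are all illustrative assumptions, not the paper's model) shows interval propagation through a linear layer and a ReLU:

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Propagate the box [lo, hi] through y = W x + b.
    # Positive weights map lo -> lo; negative weights swap the bounds.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it is applied to each bound directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy "decoder": noisy BPSK symbols in, bit logits out (random weights).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

x = np.array([1.0, -1.0, 1.0, -1.0])  # received symbols
eps = 0.1                             # assumed adversarial power budget (L-inf)
lo, hi = x - eps, x + eps

lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)
# If every output logit's [lo, hi] interval keeps the sign of the clean
# decision, no perturbation within eps can flip the decoded bits.
```

Because the clean forward pass always lies inside the propagated interval, the width of that interval acts as a verified upper bound on how much any in-budget attack can move the decoder's output.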
| Original language | English |
|---|---|
| Pages (from-to) | 843-847 |
| Number of pages | 5 |
| Journal | IEEE Communications Letters |
| Volume | 29 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Adversarial attacks
- deep learning
- interval analysis
- neural network decoder