Projected Natural Gradient Method: Unveiling Low-Power Perturbation Vulnerabilities in Deep-Learning-Based Automatic Modulation Classification

Mohamed Chiheb Ben Nasr, Paulo Freitas De Araujo-Filho, Georges Kaddoum, Azzam Mourad

Research output: Contribution to journal › Article › peer-review

Abstract

Rapid advancements in deep learning (DL) and the availability of large datasets have made the adoption of DL highly appealing across various fields. Wireless communication systems, including future 6G systems, are anticipated to incorporate intelligent components, such as automatic modulation classification (AMC) for cognitive radio and dynamic spectrum access. However, DL-based AMC models are susceptible to adversarial attacks, which consist of crafted perturbations that aim to alter the decision of a victim model. This study investigates and uncovers modern modulation classifiers' vulnerability to adversarial threats. Although attacks of this nature inherently jeopardize DL-based classifiers, contemporary attack methods typically exhibit diminished impact at lower perturbation levels. Therefore, we introduce a novel attack approach that exploits the Riemannian manifold properties of neural networks, yielding adversarial samples with heightened efficacy at lower perturbation powers. We thoroughly evaluate the effectiveness of various defense techniques and demonstrate our proposed attack's ability to thwart them. The findings of this study shed light on the limitations and vulnerabilities of DL-based AMC models in the face of adversarial attacks. Addressing these challenges can enhance the robustness and security of these models and pave the way for their reliable deployment in practical wireless communication systems, including future 6G networks.
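The abstract does not spell out the update rule, but one plausible reading of a "projected natural gradient method" is a PGD-style attack whose step direction is a natural gradient, i.e., the loss gradient preconditioned by an approximate Fisher information matrix, followed by projection back onto the perturbation budget. Below is a minimal PyTorch sketch under that assumption; the function name, the diagonal Fisher approximation, and the L2 budget are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def _l2_norm(t):
    # Per-sample L2 norm, reshaped for broadcasting over any input shape.
    return t.flatten(1).norm(dim=1).view(-1, *([1] * (t.dim() - 1)))

def projected_natural_gradient_attack(model, x, y, eps=0.05, alpha=0.01,
                                      steps=10, damping=1e-6):
    """PGD-style attack whose step is a natural gradient: the loss
    gradient preconditioned by a diagonal Fisher approximation.
    `eps` is an L2 budget standing in for the paper's perturbation-power
    constraint (assumption)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Diagonal Fisher approximation from the squared score; damping
        # keeps the elementwise inverse well conditioned (illustrative).
        fisher_diag = grad.pow(2) + damping
        nat_grad = grad / fisher_diag  # F^{-1} g for a diagonal F

        # Normalized ascent step, then projection back onto the eps-ball
        # around the clean signal.
        step = alpha * nat_grad / (_l2_norm(nat_grad) + 1e-12)
        delta = (x_adv + step).detach() - x
        scale = torch.clamp(eps / (_l2_norm(delta) + 1e-12), max=1.0)
        x_adv = (x + delta * scale).detach()
    return x_adv
```

For an AMC model, `x` would be a batch of I/Q signal tensors, e.g., of shape (batch, 2, 128), and `y` the modulation labels; in practice `eps` would be derived from the desired perturbation-to-signal power ratio rather than fixed, consistent with the low-power regime the paper targets.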

Original language: British English
Pages (from-to): 37032-37044
Number of pages: 13
Journal: IEEE Internet of Things Journal
Volume: 11
Issue number: 22
DOIs
State: Published - 2024

Keywords

  • Adaptive adversarial training
  • adversarial attack
  • automatic modulation classification (AMC)
  • energy-efficient attacks
  • natural gradient
  • white-box attack
