TY - JOUR
T1 - Projected Natural Gradient Method: Unveiling Low-Power Perturbation Vulnerabilities in Deep-Learning-Based Automatic Modulation Classification
T2 - IEEE Internet of Things Journal
AU - Chiheb Ben Nasr, Mohamed
AU - Freitas De Araujo-Filho, Paulo
AU - Kaddoum, Georges
AU - Mourad, Azzam
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Rapid advancements in deep learning (DL) and the availability of large data sets have made the adoption of DL highly appealing across various fields. Wireless communication systems, including future 6G systems, are anticipated to incorporate intelligent components, such as automatic modulation classification (AMC), for cognitive radio and dynamic spectrum access. However, DL-based AMC models are susceptible to adversarial attacks, which consist of crafted perturbations that aim to alter the decision of a victim model. This study focuses on investigating and uncovering modern modulation classifiers' vulnerability to adversarial threats. Although attacks of this nature inherently jeopardize DL-based classifiers, contemporary attack methods typically exhibit diminished impact at lower perturbation levels. Therefore, we introduce a novel attack approach that exploits the Riemannian manifold properties of neural networks, yielding adversarial samples with heightened efficacy at low perturbation powers. We thoroughly evaluate the effectiveness of various defense techniques and demonstrate our proposed attack method's ability to thwart them. The findings of this study shed light on the limitations and vulnerabilities of DL-based AMC models in the face of adversarial attacks. By addressing these challenges, we can enhance the robustness and security of these models and pave the way for their reliable deployment in practical wireless communication systems, including future 6G networks.
AB - Rapid advancements in deep learning (DL) and the availability of large data sets have made the adoption of DL highly appealing across various fields. Wireless communication systems, including future 6G systems, are anticipated to incorporate intelligent components, such as automatic modulation classification (AMC), for cognitive radio and dynamic spectrum access. However, DL-based AMC models are susceptible to adversarial attacks, which consist of crafted perturbations that aim to alter the decision of a victim model. This study focuses on investigating and uncovering modern modulation classifiers' vulnerability to adversarial threats. Although attacks of this nature inherently jeopardize DL-based classifiers, contemporary attack methods typically exhibit diminished impact at lower perturbation levels. Therefore, we introduce a novel attack approach that exploits the Riemannian manifold properties of neural networks, yielding adversarial samples with heightened efficacy at low perturbation powers. We thoroughly evaluate the effectiveness of various defense techniques and demonstrate our proposed attack method's ability to thwart them. The findings of this study shed light on the limitations and vulnerabilities of DL-based AMC models in the face of adversarial attacks. By addressing these challenges, we can enhance the robustness and security of these models and pave the way for their reliable deployment in practical wireless communication systems, including future 6G networks.
KW - Adaptive adversarial training
KW - adversarial attack
KW - automatic modulation classification (AMC)
KW - energy-efficient attacks
KW - natural gradient
KW - white-box attack
UR - http://www.scopus.com/inward/record.url?scp=105002090781&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2024.3439440
DO - 10.1109/JIOT.2024.3439440
M3 - Article
AN - SCOPUS:105002090781
SN - 2327-4662
VL - 11
SP - 37032
EP - 37044
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 22
ER -