A Survey on Explainable Artificial Intelligence for Cybersecurity

Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, Rabeb Mizouni, Alyssa Song, Robin S. Cohen, Hadi Otrok, Azzam Mourad

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

The 'black-box' nature of artificial intelligence (AI) models has raised many concerns about their use in critical applications. Explainable Artificial Intelligence (XAI) is a rapidly growing research field that aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions. In cybersecurity, XAI has the potential to transform how we approach network and system security by helping us better understand the behavior of cyber threats and design more effective defenses. In this survey, we review the state of the art in XAI for cybersecurity and explore the approaches that have been proposed to address this problem. The review follows a systematic classification of cybersecurity threats and issues in networks and digital systems. We discuss the challenges and limitations of current XAI methods in the context of cybersecurity and outline promising directions for future research.
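To make the kind of explanation discussed above concrete, the sketch below (not drawn from the surveyed paper) applies SHAP feature attributions to a toy intrusion-detection classifier. The synthetic flow features, labeling rule, and model choice are all illustrative assumptions; it simply shows how a per-prediction, feature-level explanation of a "black-box" security model can be produced.

```python
# Hypothetical sketch: explaining an intrusion-detection classifier with SHAP.
# The dataset, feature names, and model are illustrative assumptions, not
# taken from the surveyed paper.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic network-flow features.
feature_names = ["duration_s", "bytes_out", "bytes_in", "packet_count"]
X = rng.random((500, 4))
# Toy labeling rule: flows with high outbound volume and many packets
# are treated as "malicious" (class 1).
y = ((X[:, 1] + X[:, 3]) > 1.2).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:5])
if isinstance(sv, list):   # older SHAP versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:         # newer SHAP versions: (samples, features, classes)
    sv = sv[:, :, 1]

# For each of the first five flows, list the features that pushed the
# prediction toward the "malicious" class, largest influence first.
for i, row in enumerate(sv):
    contribs = sorted(zip(feature_names, row), key=lambda t: -abs(t[1]))
    print(f"flow {i}: " + ", ".join(f"{n}={v:+.3f}" for n, v in contribs))
```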
Original language: Undefined/Unknown
Pages (from-to): 5115-5140
Number of pages: 26
Journal: IEEE Transactions on Network and Service Management
Volume: 20
Issue number: 4
DOIs
State: Published - 2023

Keywords

  • Artificial intelligence
  • Learning systems
  • Network security
  • Black boxes
  • Critical applications
  • Cyber security
  • Explainable artificial intelligence (XAI)
  • Intelligence models
  • Interpretability
  • Research fields
  • Robustness
  • Systematic
  • Trustworthiness
  • Cybersecurity
