Investigating How Data Poisoning Attacks Can Impact An EEG-Based Federated Learning Model

Shamma Alshebli, Muna Alshehhi, Chan Yeob Yeun

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Detecting potential security threats from individuals within an organization can be achieved using an electroencephalogram (EEG), which captures the brain's electrical activity. The concept is based on the premise that certain brainwave patterns may be associated with malicious intentions or deceptive behaviors. Recent research on insider threat detection has used traditional machine learning classifiers to recognize patterns in brainwave data that correlate with malicious intent. However, these methods raise privacy and data security concerns because they require access to all user data. A recently introduced framework, Federated Learning (FL), offers a solution to this problem: FL aims to develop a global model classifier without accessing users' local data, thus safeguarding their privacy and sensitive information. We therefore developed an FL-based insider threat detection model trained on a dataset containing the EEG signals of 17 participants, captured from five electrodes across five power bands using the Emotiv Insight headset. Within our framework, the multilayer perceptron (MLP) classifier attained an accuracy of 94.71%. However, this approach faces potential security threats and attacks, as clients could act maliciously or external malicious actors might disrupt the network. We therefore also explore data poisoning attacks, focusing on label-flipping scenarios within our federated learning system for EEG-based insider threat detection, and illustrate how factors such as the number of poisoned clients and the percentage of poisoning affect an FL-based system. Based on our findings, a higher number of poisoned clients is far more damaging to FL-based systems and should therefore be a focal point in the security design of these systems.
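
    For illustration, the sketch below shows one way a label-flipping data poisoning attack can be simulated on federated clients before local training: a chosen number of clients have a chosen fraction of their labels flipped. This is a minimal sketch, not the authors' code; all names, shapes, and parameter values (client count, class count, feature dimensionality, poisoning settings) are illustrative assumptions rather than values from the paper's EEG dataset.

        # Minimal sketch (illustrative only): label-flipping poisoning applied to a
        # subset of federated clients before local training. Shapes and counts are
        # assumptions, not the paper's EEG data or experimental configuration.
        import numpy as np

        rng = np.random.default_rng(0)
        NUM_CLIENTS = 17          # assumption: one client per participant
        NUM_CLASSES = 2           # assumption: binary labels (benign vs. malicious intent)
        SAMPLES_PER_CLIENT = 100  # assumption
        NUM_FEATURES = 25         # assumption: 5 electrodes x 5 power bands, flattened

        def make_client_data():
            """Generate placeholder EEG-like feature vectors and labels."""
            X = rng.normal(size=(SAMPLES_PER_CLIENT, NUM_FEATURES))
            y = rng.integers(0, NUM_CLASSES, size=SAMPLES_PER_CLIENT)
            return X, y

        def flip_labels(y, poison_fraction):
            """Flip the labels of a random fraction of samples (binary label flipping)."""
            y_poisoned = y.copy()
            n_poison = int(poison_fraction * len(y))
            idx = rng.choice(len(y), size=n_poison, replace=False)
            y_poisoned[idx] = (NUM_CLASSES - 1) - y_poisoned[idx]
            return y_poisoned

        clients = [make_client_data() for _ in range(NUM_CLIENTS)]

        # The two factors studied in the paper: how many clients are poisoned,
        # and what percentage of each poisoned client's labels are flipped.
        NUM_POISONED_CLIENTS = 5   # assumption for illustration
        POISON_FRACTION = 0.5      # assumption for illustration

        poisoned_clients = []
        for i, (X, y) in enumerate(clients):
            if i < NUM_POISONED_CLIENTS:
                y = flip_labels(y, POISON_FRACTION)
            poisoned_clients.append((X, y))

        # Each client would then train its local model (e.g., an MLP) on its possibly
        # poisoned data, and the server would aggregate the local updates (e.g., FedAvg).

    Varying NUM_POISONED_CLIENTS and POISON_FRACTION in such a setup is one way to reproduce the kind of sensitivity analysis the abstract describes, i.e., measuring how global-model accuracy degrades as either factor increases.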

    Original language: British English
    Title of host publication: 2nd International Conference on Cyber Resilience, ICCR 2024
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9798350394962
    DOIs
    State: Published - 2024
    Event: 2nd International Conference on Cyber Resilience, ICCR 2024 - Dubai, United Arab Emirates
    Duration: 26 Feb 2024 - 28 Feb 2024

    Publication series

    Name: 2nd International Conference on Cyber Resilience, ICCR 2024

    Conference

    Conference: 2nd International Conference on Cyber Resilience, ICCR 2024
    Country/Territory: United Arab Emirates
    City: Dubai
    Period: 26/02/24 - 28/02/24

    Keywords

    • artificial intelligence
    • data poisoning
    • deep learning
    • EEG signals
    • federated learning
    • insider threat
    • label flipping
    • logistic regression
    • machine learning
    • multilayer perceptron
    • single feed-forward network
