Robust ML model ensembles via risk-driven anti-clustering of training data

Lara Mauri, Bruno Apolloni, Ernesto Damiani

    Research output: Contribution to journal › Article › peer-review

    4 Scopus citations

    Abstract

    In this paper, we improve the robustness of Machine Learning (ML) classifiers against training-time attacks by linking the risk of training data being tampered with to the redundancy in the ML model's design needed to prevent it. Our defense mechanism is directly applicable to classifiers' training data, without any knowledge of the specific ML model to be hardened. First, we compute each training point's proximity to the class separation surfaces, identified via a reference linear model. Each data point is associated with a risk index, which is used to partition the training set by an unsupervised technique. Then, we train a learner for each partition and combine the learners' outputs in an ensemble. Our method treats the protected ML classifier as a black box and is inherently robust to transfer attacks. Experiments show that, for data poisoning rates between 6 and 25 percent of the training set, our method is more robust than both benchmark defenses and a monolithic version of the model trained on the whole training set. Our results make a convincing case for adopting training set partitioning and ensemble generation as a stage of ML models' development and deployment lifecycle.
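    The pipeline described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: a simple perceptron stands in for the reference linear model, the risk index is taken as the inverse distance to its separating surface, a per-class round-robin deal over risk-sorted points approximates the unsupervised anti-clustering step (so every partition receives a similar spread of risk levels), and nearest-centroid classifiers play the role of the per-partition learners combined by majority vote. All names (`risk_index`, `anti_cluster`, `ensemble_fit_predict`, the toy data) are illustrative assumptions.

    ```python
    # Hedged sketch of the abstract's pipeline (not the paper's exact method).

    def perceptron(X, y, epochs=50, lr=0.1):
        """Reference linear model: train w.x + b with labels y in {-1, +1}."""
        w, b = [0.0] * len(X[0]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                score = sum(wj * xj for wj, xj in zip(w, xi)) + b
                if yi * score <= 0:  # misclassified: perceptron update
                    w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                    b += lr * yi
        return w, b

    def risk_index(X, w, b, eps=1e-6):
        """Risk grows as a point approaches the class separation surface."""
        norm = sum(wj * wj for wj in w) ** 0.5 or 1.0
        return [1.0 / (abs(sum(wj * xj for wj, xj in zip(w, xi)) + b) / norm + eps)
                for xi in X]

    def anti_cluster(risks, y, k):
        """Per-class round-robin deal over risk-sorted indices: each of the
        k partitions gets both classes and a similar mix of risk levels."""
        parts = [[] for _ in range(k)]
        for label in set(y):
            idxs = sorted((i for i in range(len(y)) if y[i] == label),
                          key=lambda i: risks[i], reverse=True)
            for pos, i in enumerate(idxs):
                parts[pos % k].append(i)
        return parts

    def centroid_classifier(X, y):
        """Per-partition learner: fit class centroids, predict the nearest."""
        cents = {}
        for label in set(y):
            pts = [xi for xi, yi in zip(X, y) if yi == label]
            cents[label] = [sum(dim) / len(pts) for dim in zip(*pts)]
        return lambda x: min(cents, key=lambda l: sum((a - c) ** 2
                             for a, c in zip(x, cents[l])))

    def ensemble_fit_predict(X, y, k=3):
        """Full pipeline: risk -> anti-clustered partitions -> voting ensemble."""
        w, b = perceptron(X, y)
        parts = anti_cluster(risk_index(X, w, b), y, k)
        learners = [centroid_classifier([X[i] for i in p], [y[i] for i in p])
                    for p in parts]
        def predict(x):  # majority vote over the per-partition learners
            votes = [clf(x) for clf in learners]
            return max(set(votes), key=votes.count)
        return predict

    # Toy data (hypothetical): two well-separated classes in 2-D.
    X_toy = [[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5], [-1, 0],
             [5, 5], [4, 5], [5, 4], [4, 4], [4.5, 4.5], [6, 5]]
    y_toy = [-1] * 6 + [1] * 6
    predict = ensemble_fit_predict(X_toy, y_toy, k=3)
    ```

    Because the anti-clustering deal is stratified by class, every partition trains a learner that has seen both classes; a poisoned point's influence is confined to the single partition it lands in, which is the redundancy the abstract links to tampering risk.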

    Original language: British English
    Pages (from-to): 122-140
    Number of pages: 19
    Journal: Information Sciences
    Volume: 633
    DOIs
    State: Published - Jul 2023

    Keywords

    • Adversarial machine learning
    • Machine learning security
    • Poisoning attack
    • Risk modeling
    • Robust ensemble models
    • Training set partitioning

