Protecting machine learning from poisoning attacks: A risk-based approach

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

The ever-increasing interest in and widespread diffusion of Machine Learning (ML)-based applications have driven a substantial amount of research into offensive and defensive ML. ML models can be attacked from different angles: poisoning attacks, the focus of this paper, inject maliciously crafted data points into the training set to modify the model behavior; adversarial attacks maliciously manipulate inference-time data points to fool the ML model and drive its predictions toward the attacker's objective. Ensemble-based techniques are among the most relevant defenses against poisoning attacks; they replace the monolithic ML model with an ensemble of ML models trained on different (disjoint) subsets of the training set. These techniques assign data points to the training sets of the ensemble members (routing) randomly or using a hash function, on the assumption that evenly distributing poisoned data points improves ML robustness. Our paper departs from this assumption and implements a risk-based ensemble technique in which a risk management process performs smart routing of data points to the training sets. An extensive experimental evaluation demonstrates the effectiveness of the proposed approach in terms of its soundness, robustness, and performance.
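The contrast between the two routing strategies can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the risk management process, so `risk_score` below is a hypothetical placeholder, and the policy of confining high-risk points to a few partitions is one plausible reading of "smart routing".

```python
import hashlib

def hash_route(point_id, n_models):
    """Baseline routing: a hash function spreads points evenly
    across the training sets of the ensemble members."""
    digest = hashlib.sha256(str(point_id).encode()).hexdigest()
    return int(digest, 16) % n_models

def risk_route(points, n_models, risk_score):
    """Risk-based routing sketch: rank points by a (hypothetical)
    risk score and fill partitions in descending order of risk,
    so the riskiest points are concentrated in few training sets
    and most models train on low-risk data."""
    partitions = [[] for _ in range(n_models)]
    ranked = sorted(points, key=risk_score, reverse=True)
    for i, point in enumerate(ranked):
        # Contiguous chunks of the ranked list go to the same partition.
        partitions[i * n_models // len(ranked)].append(point)
    return partitions
```

Under hash-based routing, each partition receives roughly the same share of poisoned points; under the risk-based sketch, a well-calibrated risk score would quarantine suspicious points into a minority of partitions, leaving the remaining ensemble members clean.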

Original language: British English
Article number: 104468
Journal: Computers and Security
Volume: 155
State: Published - Aug 2025

Keywords

  • Ensemble
  • Machine learning
  • Poisoning
  • Risk
  • Robustness

