A quantization-based technique for privacy preserving distributed learning

Research output: Contribution to journal › Article › peer-review

Abstract

The distributed training of machine learning (ML) models presents significant challenges in ensuring data and parameter protection. Privacy-enhancing technologies (PETs) offer a promising initial step towards addressing these concerns, yet achieving confidentiality and differential privacy in distributed learning remains complex. This paper introduces a novel data protection technique tailored for the distributed training of ML models, ensuring compliance with regulatory standards. Our approach utilizes a quantized multi-hash data representation, known as Hash-Comb, combined with randomization to achieve Rényi differential privacy (RDP) for both training data and model parameters. The training protocol is designed to require only the common knowledge of a few hyper-parameters, which are securely shared using multi-party computation protocols. Experimental results demonstrate the effectiveness of our method in preserving both privacy and model accuracy.
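The abstract does not spell out the Hash-Comb construction, but the combination of random quantization, multi-hashing, and randomization it describes can be illustrated with a minimal sketch. The function names (`quantize`, `randomized_response`, `multi_hash`) and all parameters below are illustrative assumptions, not the paper's actual protocol; the randomization step here uses k-ary randomized response, a standard mechanism that satisfies ε-DP and hence (α, ε)-RDP for every order α.

```python
# Illustrative sketch only: the real Hash-Comb scheme is defined in the paper.
import hashlib
import math
import random

def quantize(x, lo, hi, n_bins):
    """Uniformly quantize x in [lo, hi] to a bin index in [0, n_bins)."""
    x = min(max(x, lo), hi)
    idx = int((x - lo) / (hi - lo) * n_bins)
    return min(idx, n_bins - 1)

def randomized_response(bin_idx, n_bins, epsilon):
    """k-ary randomized response over the bins: keep the true bin with
    probability e^eps / (e^eps + k - 1), otherwise report a uniformly
    chosen *other* bin. This mechanism satisfies eps-DP, and therefore
    (alpha, eps)-RDP for every order alpha."""
    p = math.exp(epsilon) / (math.exp(epsilon) + n_bins - 1)
    if random.random() < p:
        return bin_idx
    other = random.randrange(n_bins - 1)
    return other if other < bin_idx else other + 1

def multi_hash(bin_idx, n_hashes=3):
    """Encode the (randomized) bin index under several salted hashes, so
    parties can compare quantized values without exchanging them in the
    clear. Salts here are just integers for illustration."""
    return [hashlib.sha256(f"{salt}:{bin_idx}".encode()).hexdigest()[:8]
            for salt in range(n_hashes)]

# A party would share only the hashed, randomized bin of each value:
report = multi_hash(randomized_response(quantize(0.42, 0.0, 1.0, 10), 10, 1.0))
```

In a distributed run, the shared hyper-parameters mentioned in the abstract would correspond to quantities like `lo`, `hi`, `n_bins`, and the hash salts, which all parties must agree on (e.g. via an MPC protocol) before training.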

Original language: British English
Article number: 107741
Journal: Future Generation Computer Systems
Volume: 167
DOIs
State: Published - Jun 2025

Keywords

  • Confidentiality
  • Differential privacy
  • Hashing
  • Random quantization
