Abstract
Cross-Device Federated Learning (CDFL) systems enable fully decentralized training networks in which each participating device can act as both a model owner and a model producer. CDFL systems need to ensure fairness, trustworthiness, and high-quality model availability across all participants in the underlying training networks. This paper presents TrustFed, a blockchain-based framework for CDFL systems that detects model poisoning attacks, enables fair training settings, and maintains the participating devices' reputations. TrustFed provides fairness by detecting and removing attackers from the training distributions. It uses blockchain smart contracts to maintain participating devices' reputations, compelling participants to make active and honest model contributions. We implemented TrustFed using a Python-simulated FL framework, blockchain smart contracts, and statistical outlier detection techniques, and tested it on a large-scale Industrial Internet of Things (IIoT) dataset under multiple attack models. We found that TrustFed produces better results than conventional baseline approaches across multiple evaluation aspects.
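The abstract mentions statistical outlier detection for flagging poisoned model updates but does not specify the statistic used. The sketch below is a minimal, hypothetical illustration of that idea using a median-absolute-deviation rule over the L2 norms of flattened client updates; the function name, the MAD statistic, and the 3.5 cut-off are assumptions, not the paper's method.

```python
# A minimal sketch (not the authors' implementation) of statistical outlier
# detection over client model updates. The MAD-based rule, the 3.5 threshold,
# and all identifiers are assumptions for illustration only.
import numpy as np

def filter_poisoned_updates(updates, threshold=3.5):
    """Flag client updates whose L2 norm is a statistical outlier.

    updates: list of 1-D numpy arrays, one flattened model update per device.
    Returns (kept_indices, flagged_indices).
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) + 1e-12  # guard against zero MAD
    # Modified z-score; updates scoring above the threshold are treated as outliers.
    scores = 0.6745 * np.abs(norms - median) / mad
    flagged = np.where(scores > threshold)[0]
    kept = np.where(scores <= threshold)[0]
    return kept.tolist(), flagged.tolist()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0, 0.01, 1000) for _ in range(9)]
    poisoned = [rng.normal(0, 1.0, 1000)]      # inflated update from a simulated attacker
    kept, flagged = filter_poisoned_updates(honest + poisoned)
    print("kept:", kept, "flagged:", flagged)  # the tenth update should be flagged
```

In a TrustFed-style pipeline, flagged devices would be excluded from the current aggregation round and their on-chain reputation scores adjusted accordingly; the exact reputation update logic lives in the smart contracts and is not shown here.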
| Original language | British English |
|---|---|
| Journal | IEEE Transactions on Industrial Informatics |
| DOIs | |
| State | Accepted/In press - 2021 |
Keywords
- Blockchain
- Computational modeling
- Data models
- Fairness
- Federated Learning
- IIoT
- Industrial Internet of Things
- Performance evaluation
- Reputation
- Security
- Servers
- Training
- Trust