TY - JOUR
T1 - QoS-Enabled Wireless Split Federated Learning
T2 - A Reinforcement Learning and Optimization Approach
AU - Khan, Latif U.
AU - Guizani, Maher
AU - Muhaidat, Sami
AU - Ayyash, Moussa
N1 - Publisher Copyright:
© 1975-2011 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) offers several advantages, including reduced communication overhead and improved privacy protection in scenarios with frequent data generation. In FL, end-devices train local models and then transfer them to the cloud or network edge for global aggregation; the aggregated model is returned so that end-devices can improve their local models, and this procedure is repeated until convergence. Despite its benefits, FL has notable drawbacks, the most significant being constrained computing resources: end-devices generally lack the computational capacity to train local models effectively. Split FL (SFL) was developed to address this issue. However, wireless resource limitations and uncertainty also make SFL difficult to enable. For SFL at the network edge, we formulate a joint task-offloading, wireless resource allocation, and end-device computing resource optimization problem, subject to a quality of service (QoS) constraint on latency. Because the problem involves both continuous and binary variables, it is a mixed-integer non-linear programming problem and is challenging to solve. We propose a solution based on optimization and a double deep Q-network (DDQN) with dueling, and validate it through comprehensive simulations. The proposed scheme, DDQN+dueling, outperforms conventional DDQN-based schemes, converging faster and attaining the QoS requirement across various configurations.
KW - double deep Q-network
KW - federated learning
KW - resource-constrained consumer electronic devices
KW - split federated learning
UR - https://www.scopus.com/pages/publications/105010213851
U2 - 10.1109/TCE.2025.3587176
DO - 10.1109/TCE.2025.3587176
M3 - Article
AN - SCOPUS:105010213851
SN - 0098-3063
JO - IEEE Transactions on Consumer Electronics
JF - IEEE Transactions on Consumer Electronics
ER -