TY - JOUR
T1 - Green, Quantized Federated Learning Over Wireless Networks
T2 - An Energy-Efficient Design
AU - Kim, Minsu
AU - Saad, Walid
AU - Mozaffari, Mohammad
AU - Debbah, Mérouane
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2024/2/1
Y1 - 2024/2/1
N2 - The practical deployment of federated learning (FL) over wireless networks requires balancing energy efficiency, convergence rate, and a target accuracy due to the limited resources available at devices. Prior art on FL often trains deep neural networks (DNNs) to achieve high accuracy and fast convergence using a 32-bit precision level. However, such an approach is impractical for resource-constrained devices since DNNs typically have high computational complexity and memory requirements. Thus, there is a need to reduce the precision level in DNNs to lower their energy expenditure. In this paper, a green, quantized FL framework, which represents data with a finite precision level in both local training and uplink transmission, is proposed. Here, the finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in a fixed-precision format. In the considered FL model, each device trains its QNN and transmits a quantized training result to the base station. Energy models for the local training and the transmission with quantization are rigorously derived. To minimize the energy consumption and the number of communication rounds simultaneously, a multi-objective optimization problem is formulated with respect to the number of local iterations, the number of selected devices, and the precision levels for both local training and transmission, while ensuring convergence under a target accuracy constraint. To solve this problem, the convergence rate of the proposed FL system is analytically derived with respect to the system control variables. Then, the Pareto boundary of the problem is characterized to provide efficient solutions using the normal boundary intersection method. Design insights on balancing the tradeoff between the two objectives while achieving a target accuracy are drawn by using the Nash bargaining solution and analyzing the derived convergence rate. Simulation results show that the proposed FL framework can reduce the energy consumption until convergence by up to 70% compared with a baseline FL algorithm that represents data with full precision, without degrading the convergence rate.
KW - Computational modeling
KW - Energy efficiency
KW - Neural networks
KW - Pareto optimization
KW - Simulation
KW - Wireless networks
UR - http://www.scopus.com/inward/record.url?scp=85163560217&partnerID=8YFLogxK
U2 - 10.1109/TWC.2023.3289177
DO - 10.1109/TWC.2023.3289177
M3 - Article
AN - SCOPUS:85163560217
SN - 1536-1276
VL - 23
SP - 1386
EP - 1402
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 2
ER -