TY - GEN
T1 - Explainable AI-based Federated Deep Reinforcement Learning for Trusted Autonomous Driving
AU - Rjoub, Gaith
AU - Bentahar, Jamal
AU - Wahab, Omar Abdel
N1 - Funding Information:
We would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de Recherche du Québec - Nature et Technologie (FQRNT), and the Department of National Defence of Canada, Innovation for Defence Excellence and Security (IDEaS) program.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Recently, the concept of autonomous driving has become prevalent in the domain of intelligent transportation due to its promises of increased safety, traffic efficiency, fuel economy, and reduced travel time. Numerous studies have been conducted in this area to help newcomer vehicles plan their trajectory and velocity. However, most of these proposals consider trajectory planning only in conjunction with a limited data set (i.e., metropolitan areas, highways, and residential areas) or assume a fully connected and automated vehicle environment. Moreover, these approaches are not explainable and lack trust mechanisms for the contributions of the participating vehicles. To tackle these problems, we design an Explainable Artificial Intelligence (XAI) Federated Deep Reinforcement Learning model to improve the effectiveness and trustworthiness of the trajectory decisions for newcomer Autonomous Vehicles (AVs). When a newcomer AV seeks help for trajectory planning, the edge server launches a federated learning process to train the trajectory and velocity prediction model in a distributed, collaborative fashion among participating AVs. One essential challenge in this approach is AV selection, i.e., how to select the appropriate AVs that should participate in the federated learning process. For this purpose, XAI is first used to compute the contribution of each feature provided by each vehicle to the overall solution. This helps us compute a trust value for each AV in the model. Then, a trust-based deep reinforcement learning model is put forward to make the selection decisions. Experiments using a real-life dataset show that our solution achieves better performance than benchmark solutions (i.e., Deep Q-Network (DQN) and Random Selection (RS)).
AB - Recently, the concept of autonomous driving has become prevalent in the domain of intelligent transportation due to its promises of increased safety, traffic efficiency, fuel economy, and reduced travel time. Numerous studies have been conducted in this area to help newcomer vehicles plan their trajectory and velocity. However, most of these proposals consider trajectory planning only in conjunction with a limited data set (i.e., metropolitan areas, highways, and residential areas) or assume a fully connected and automated vehicle environment. Moreover, these approaches are not explainable and lack trust mechanisms for the contributions of the participating vehicles. To tackle these problems, we design an Explainable Artificial Intelligence (XAI) Federated Deep Reinforcement Learning model to improve the effectiveness and trustworthiness of the trajectory decisions for newcomer Autonomous Vehicles (AVs). When a newcomer AV seeks help for trajectory planning, the edge server launches a federated learning process to train the trajectory and velocity prediction model in a distributed, collaborative fashion among participating AVs. One essential challenge in this approach is AV selection, i.e., how to select the appropriate AVs that should participate in the federated learning process. For this purpose, XAI is first used to compute the contribution of each feature provided by each vehicle to the overall solution. This helps us compute a trust value for each AV in the model. Then, a trust-based deep reinforcement learning model is put forward to make the selection decisions. Experiments using a real-life dataset show that our solution achieves better performance than benchmark solutions (i.e., Deep Q-Network (DQN) and Random Selection (RS)).
KW - Autonomous Vehicles Selection
KW - Deep Reinforcement Learning
KW - Edge Computing
KW - Explainable Artificial Intelligence
KW - Federated Learning
KW - Trajectory Planning
KW - Trust
UR - http://www.scopus.com/inward/record.url?scp=85135310210&partnerID=8YFLogxK
U2 - 10.1109/IWCMC55113.2022.9824617
DO - 10.1109/IWCMC55113.2022.9824617
M3 - Conference contribution
AN - SCOPUS:85135310210
T3 - 2022 International Wireless Communications and Mobile Computing, IWCMC 2022
SP - 318
EP - 323
BT - 2022 International Wireless Communications and Mobile Computing, IWCMC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Wireless Communications and Mobile Computing, IWCMC 2022
Y2 - 30 May 2022 through 3 June 2022
ER -