TY - JOUR
T1 - Multi-agent Deep Reinforcement Learning-based Task Scheduling and Resource Sharing for O-RAN-empowered Multi-UAV-assisted Wireless Sensor Networks
AU - Betalo, Mesfin Leranso
AU - Leng, Supeng
AU - Abishu, Hayla Nahom
AU - Dharejo, Fayaz Ali
AU - Seid, Abegaz Mohammed
AU - Erbad, Aiman
AU - Naqvi, Rizwan Ali
AU - Zhou, Longyu
AU - Guizani, Mohsen
N1 - Publisher Copyright: IEEE
PY - 2023
Y1 - 2023
N2 - Wireless sensor networks (WSNs) with ultra-dense sensors are crucial for several industries, such as smart agricultural systems deployed in fifth generation (5G) and beyond-5G Open Radio Access Networks (O-RAN). The WSNs employ multiple unmanned aerial vehicles (UAVs) to collect data from multiple sensor nodes (SNs) and relay it to the central controller for processing. UAVs also provide resources to SNs and extend the network coverage over a vast geographical area. The O-RAN allows the use of open standards and interfaces to create a wireless network for communications between the UAVs and ground SNs. It enables real-time data transfer, remote control, and other applications that require a reliable and high-speed connection by providing flexibility and reliability for UAV-assisted WSNs to meet the requirements of smart agricultural applications. However, the limited battery life and transmission power of UAVs and the shortage of energy resources at the SNs make it difficult to collect all the data and relay it to the base station, resulting in inefficient task computation and resource management in smart agricultural systems. In this paper, we propose a joint UAV task scheduling, trajectory planning, and resource-sharing framework for multi-UAV-assisted WSNs in smart agricultural monitoring scenarios that schedules UAVs' charging, data collection, and landing times and allows UAVs to share energy with SNs. The main objective of our proposed framework is to minimize UAV energy consumption and network latency for effective data collection within a specific time frame. We formulate the multi-objective optimization problem, which is non-convex, and transform it into a Markov decision process (MDP) solved with a multi-agent deep reinforcement learning (MADRL) algorithm. The simulation results show that the proposed MADRL algorithm reduces the energy consumption cost by 61.92%, 68.02%, and 69.9% compared to the deep Q-network (DQN), greedy, and mixed-integer linear programming (MILP) approaches, respectively.
AB - Wireless sensor networks (WSNs) with ultra-dense sensors are crucial for several industries, such as smart agricultural systems deployed in fifth generation (5G) and beyond-5G Open Radio Access Networks (O-RAN). The WSNs employ multiple unmanned aerial vehicles (UAVs) to collect data from multiple sensor nodes (SNs) and relay it to the central controller for processing. UAVs also provide resources to SNs and extend the network coverage over a vast geographical area. The O-RAN allows the use of open standards and interfaces to create a wireless network for communications between the UAVs and ground SNs. It enables real-time data transfer, remote control, and other applications that require a reliable and high-speed connection by providing flexibility and reliability for UAV-assisted WSNs to meet the requirements of smart agricultural applications. However, the limited battery life and transmission power of UAVs and the shortage of energy resources at the SNs make it difficult to collect all the data and relay it to the base station, resulting in inefficient task computation and resource management in smart agricultural systems. In this paper, we propose a joint UAV task scheduling, trajectory planning, and resource-sharing framework for multi-UAV-assisted WSNs in smart agricultural monitoring scenarios that schedules UAVs' charging, data collection, and landing times and allows UAVs to share energy with SNs. The main objective of our proposed framework is to minimize UAV energy consumption and network latency for effective data collection within a specific time frame. We formulate the multi-objective optimization problem, which is non-convex, and transform it into a Markov decision process (MDP) solved with a multi-agent deep reinforcement learning (MADRL) algorithm. The simulation results show that the proposed MADRL algorithm reduces the energy consumption cost by 61.92%, 68.02%, and 69.9% compared to the deep Q-network (DQN), greedy, and mixed-integer linear programming (MILP) approaches, respectively.
KW - Autonomous aerial vehicles
KW - Data collection
KW - Energy consumption
KW - Monitoring
KW - multi-agent deep reinforcement learning
KW - Resource management
KW - resource sharing
KW - Task analysis
KW - task scheduling
KW - trajectory planning
KW - unmanned aerial vehicles
KW - Wireless sensor networks
UR - http://www.scopus.com/inward/record.url?scp=85177084299&partnerID=8YFLogxK
U2 - 10.1109/TVT.2023.3330661
DO - 10.1109/TVT.2023.3330661
M3 - Article
AN - SCOPUS:85177084299
SN - 0018-9545
SP - 1
EP - 14
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
ER -