Distributed reinforcement learning frameworks for cooperative retransmission in wireless networks

Ghasem Naddafzadeh-Shirazi, Peng Yong Kong, Chen Khong Tham

Research output: Contribution to journal › Article › peer-review

35 Scopus citations

Abstract

We address the problem of cooperative retransmission at the media access control (MAC) layer of a distributed wireless network with spatial reuse, where multiple concurrent transmissions from source and relay nodes are possible. We propose a novel Markov decision process (MDP) framework for adjusting the transmission powers and transmission probabilities of the source and relay nodes so as to maximize network throughput per unit of consumed energy. We also propose distributed methods that avoid solving a centralized MDP model with a large number of states by employing model-free reinforcement learning (RL) algorithms. We show convergence to a local solution and derive a lower bound on the performance of the proposed RL algorithms. We further confirm empirically that the proposed learning schemes are robust to collisions, scale with network size, and provide significant cooperative diversity while retaining low complexity and fast convergence.
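As a rough illustration of the kind of distributed, model-free learning the abstract describes, the sketch below shows a per-node Q-learning agent whose actions are discretized (transmission probability, power level) pairs and whose reward is delivered throughput per unit of consumed energy. The state encoding, action granularity, and parameter values are assumptions made for illustration only; this is not the authors' exact algorithm.

```python
import random
from collections import defaultdict

# Hypothetical discretization of the per-node action space: each node picks a
# transmit probability and a power level. Names and granularity are assumptions.
TX_PROBS = [0.0, 0.25, 0.5, 0.75, 1.0]
POWER_LEVELS = [1, 2, 3]          # abstract power units
ACTIONS = [(p, w) for p in TX_PROBS for w in POWER_LEVELS]


class NodeAgent:
    """Model-free Q-learning agent run independently at each source/relay node.

    A generic sketch of distributed, model-free RL for this setting; the paper's
    state space and update rule may differ.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> value estimate
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy exploration over (tx_prob, power) pairs.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup using only locally observed feedback
        # (e.g., ACK/NACK outcomes), so no centralized MDP needs to be solved.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def reward(delivered_bits, energy_spent):
    # Throughput per unit of consumed energy, the objective named in the abstract.
    return delivered_bits / energy_spent if energy_spent > 0 else 0.0
```

In use, each node would call `choose` once per slot, observe whether its (re)transmission succeeded, and feed the resulting energy-normalized throughput back through `update`; coordination arises only implicitly through the shared channel.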

Original language: British English
Article number: 5512673
Pages (from-to): 4157-4162
Number of pages: 6
Journal: IEEE Transactions on Vehicular Technology
Volume: 59
Issue number: 8
DOIs
State: Published - Oct 2010

Keywords

  • Distributed Markov decision process (MDP) for wireless networks
  • media access control (MAC) cooperative retransmission
  • reinforcement learning (RL)
