Abstract
We address the problem of cooperative retransmission in the media access control (MAC) layer of a distributed wireless network with spatial reuse, where there can be multiple concurrent transmissions from the source and relay nodes. We propose a novel Markov decision process (MDP) framework for adjusting the transmission powers and transmission probabilities at the source and relay nodes to maximize network throughput per unit of consumed energy. We also propose distributed methods that avoid solving a centralized MDP model with a large number of states by employing model-free reinforcement learning (RL) algorithms. We show convergence to a local solution and compute a lower bound on the performance of the proposed RL algorithms. We further confirm empirically that the proposed learning schemes are robust to collisions, scale with the network size, and provide significant cooperative diversity while enjoying low complexity and fast convergence.
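As a rough illustration of the model-free RL idea described above, the sketch below has a single relay node learn a (transmit power, retransmission probability) pair via stateless Q-learning, with the reward defined as delivered traffic per unit of consumed energy. The action set, channel model, and all numeric constants are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Hypothetical single-node sketch: tabular (stateless) Q-learning over a small
# set of (transmit power, retransmission probability) actions. The reward is
# delivered traffic per unit of consumed energy, echoing the throughput-per-
# energy objective in the abstract. The channel model is a toy assumption.

ACTIONS = [(p, q) for p in (0.5, 1.0, 2.0)      # transmit power (W), assumed
                  for q in (0.25, 0.5, 1.0)]    # retransmission probability

ALPHA, EPSILON = 0.1, 0.1                       # learning and exploration rates


def success_prob(power, retx_prob):
    """Toy stand-in for the channel/collision model: higher power helps with
    diminishing returns; aggressive retransmission raises collision risk."""
    import math
    return (1.0 - math.exp(-0.8 * power)) * (1.0 - 0.3 * retx_prob)


def run_relay(episodes=20000, seed=0):
    rng = random.Random(seed)
    q_table = [0.0] * len(ACTIONS)              # one entry per action
    for _ in range(episodes):
        if rng.random() < EPSILON:
            a = rng.randrange(len(ACTIONS))     # explore
        else:
            a = max(range(len(ACTIONS)), key=q_table.__getitem__)  # exploit
        power, retx = ACTIONS[a]
        transmitted = rng.random() < retx
        delivered = transmitted and rng.random() < success_prob(power, retx)
        energy = power if transmitted else 0.01  # idle listening cost, assumed
        reward = (1.0 if delivered else 0.0) / energy
        q_table[a] += ALPHA * (reward - q_table[a])  # stateless Q update
    best = max(range(len(ACTIONS)), key=q_table.__getitem__)
    return ACTIONS[best]


best_power, best_prob = run_relay()
```

Under this toy reward, the learner tends toward low transmit power (energy appears in the denominator of the reward), which mirrors the throughput-per-energy trade-off the paper optimizes; in the paper's distributed setting, each node would run such an update on its own local observations.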
| Original language | British English |
| --- | --- |
| Article number | 5512673 |
| Pages (from-to) | 4157-4162 |
| Number of pages | 6 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 59 |
| Issue number | 8 |
| DOIs | |
| State | Published - Oct 2010 |
Keywords
- Distributed Markov decision process (MDP) for wireless networks
- Media access control (MAC) cooperative retransmission
- Reinforcement learning (RL)