Closed-loop stability analysis of deep reinforcement learning controlled systems with experimental validation

Mohammed Basheer Mohiuddin, Igor Boiko, Rana Azzam, Yahya Zweiri

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Trained deep reinforcement learning (DRL) based controllers can effectively control dynamic systems where classical controllers are ineffective or difficult to tune. However, the lack of closed-loop stability guarantees for systems controlled by trained DRL agents hinders their adoption in practical applications. This study investigates the closed-loop stability of dynamic systems controlled by trained DRL agents using Lyapunov analysis based on a linear-quadratic polynomial approximation of the trained agent. In addition, this work develops an understanding of the system's stability margin to determine operational boundaries and critical thresholds of the system's physical parameters for effective operation. The proposed analysis is verified on a DRL-controlled system in several simulated and experimental scenarios. The DRL agent is trained using a detailed dynamic model of a non-linear system and then tested on the corresponding real-world hardware platform without any fine-tuning. Experiments are conducted over a wide range of system states and physical parameters, and the results confirm the validity of the proposed stability analysis (https://youtu.be/QlpeD5sTlPU).
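The core idea in the abstract — approximate the trained agent by a polynomial, then apply Lyapunov analysis to the closed loop — can be sketched as follows. This is a minimal illustrative example, not the paper's method: the plant matrices A, B and the fitted linear gain K are assumed placeholder values, and only the linear term of the fitted linear-quadratic policy is used for the local analysis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical linearized plant dynamics x_dot = A x + B u (assumed values,
# not from the paper).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# Suppose the trained DRL policy u = pi(x) has been fitted by a
# linear-quadratic polynomial; near the equilibrium the linear term
# dominates, giving an effective state-feedback gain K (assumed here).
K = np.array([[-3.0, -1.5]])

# Closed-loop dynamics matrix under the approximated policy.
Acl = A + B @ K

# Lyapunov analysis: solve Acl^T P + P Acl = -Q for P, with Q > 0.
Q = np.eye(2)
P = solve_continuous_lyapunov(Acl.T, -Q)

# The closed loop is locally asymptotically stable iff P is positive
# definite, i.e. all its eigenvalues are strictly positive.
eigs = np.linalg.eigvalsh(P)
stable = bool(np.all(eigs > 0))
print("P eigenvalues:", eigs, "stable:", stable)
```

Sweeping a physical parameter inside A and repeating this check is one way to probe the stability margin and operational boundaries the abstract refers to.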

Original language: British English
Pages (from-to): 1649-1668
Number of pages: 20
Journal: IET Control Theory and Applications
Volume: 18
Issue number: 13
DOIs
State: Published - Sep 2024

Keywords

  • control system analysis
  • cranes
  • iterative learning control
  • learning (artificial intelligence)
  • learning systems
  • neural nets
  • neurocontrollers
