Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators

Thomas George Thuruthel, Egidio Falotico, Federico Renda, Cecilia Laschi

Research output: Contribution to journal › Article › peer-review

230 Scopus citations


Dynamic control of soft robotic manipulators is an open problem yet to be well explored and analyzed. Most current applications of soft robotic manipulators use static or quasi-dynamic controllers based on kinematic models or on assumed linearity in the joint space. Such approaches, however, do not truly exploit the rich dynamics of a soft-bodied system. In this paper, we present a model-based policy learning algorithm for closed-loop predictive control of a soft robotic manipulator. The forward dynamic model is represented using a recurrent neural network, and the closed-loop policy is derived via trajectory optimization and supervised learning. The approach is first verified on a simulated piecewise constant strain model of a cable-driven underactuated soft manipulator. We then demonstrate experimentally, on a soft pneumatically actuated manipulator, that closed-loop control policies can be derived which accommodate variable-frequency control and unmodeled external loads.
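The pipeline the abstract describes — learn a forward dynamics model from data, optimize trajectories under that model, then distill the optimized actions into a closed-loop policy by supervised learning — can be sketched in miniature. This is an illustrative stand-in only: the paper uses a recurrent neural network and a trajectory optimizer, whereas this sketch substitutes a least-squares linear model, random-shooting action selection, and a 1-D toy plant; all names (`plant`, `optimize_action`, `policy`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "soft arm" plant: unknown to the learner, used only to generate data.
def plant(x, u):
    return 0.9 * x + 0.5 * u

# --- Step 1: learn a forward dynamics model from random rollouts ---
# (The paper uses a recurrent neural network; a linear least-squares model
#  stands in here to keep the sketch short.)
X, U, Xn = [], [], []
x = 0.0
for _ in range(500):
    u = rng.uniform(-1, 1)
    xn = plant(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn
A = np.column_stack([X, U])                      # features [x, u]
theta, *_ = np.linalg.lstsq(A, np.array(Xn), rcond=None)
model = lambda x, u: theta[0] * x + theta[1] * u  # learned dynamics

# --- Step 2: trajectory optimization under the learned model ---
# Random shooting stands in for the paper's trajectory optimizer.
def optimize_action(x, target, n_samples=200):
    us = rng.uniform(-1, 1, n_samples)
    costs = (model(x, us) - target) ** 2
    return us[np.argmin(costs)]

# --- Step 3: distill a closed-loop policy by supervised learning ---
# Fit pi(x, target) -> u on the optimized state/action pairs.
states, goals, actions = [], [], []
for _ in range(300):
    s = rng.uniform(-2, 2)
    g = rng.uniform(-2, 2)
    states.append(s); goals.append(g); actions.append(optimize_action(s, g))
P = np.column_stack([states, goals])
w, *_ = np.linalg.lstsq(P, np.array(actions), rcond=None)
policy = lambda x, g: w[0] * x + w[1] * g

# Closed-loop rollout on the true plant toward a setpoint.
x, goal = 0.0, 1.0
for _ in range(20):
    x = plant(x, policy(x, goal))
print(round(x, 3))
```

The key design point carried over from the paper is that the optimizer is only run offline to generate training pairs; at run time the cheap learned policy alone closes the loop, which is what makes variable-frequency control feasible.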

Original language: British English
Article number: 8531756
Pages (from-to): 127-134
Number of pages: 8
Journal: IEEE Transactions on Robotics
Issue number: 1
State: Published - Feb 2019


  • Dynamic control
  • machine learning
  • manipulation
  • reinforcement learning
  • soft robotics


