Improving reinforcement learning based moving object grasping with trajectory prediction

Binzhao Xu, Taimur Hassan, Irfan Hussain

Research output: Contribution to journal › Article › peer-review



Currently, most grasping systems are designed to grasp only static objects, and grasping dynamic objects has received less attention in the literature. In traditional manipulation schemes, achieving dynamic grasping requires either a highly precise dynamic model or sophisticated predefined grasping states and gestures, both of which are hard to obtain and tedious to design. In this paper, we develop a novel reinforcement learning (RL)-based dynamic grasping framework with a trajectory prediction module to address these issues. In particular, we divide dynamic grasping into two parts: RL-based learning of grasping strategies and trajectory prediction. In simulation, an RL agent is trained to grasp a static object. When this well-trained agent is transferred to the real world, its observation is augmented with predictions from an LSTM-based trajectory prediction module. We validated the proposed method on an experimental setup consisting of a Baxter manipulator with a two-finger gripper and an object placed on a moving car. We also evaluated how well RL performs both with and without the trajectory prediction module. Experimental results demonstrate that our method can grasp objects moving along different trajectories at various speeds.
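The core idea in the abstract is that the sim-trained policy's observation is augmented with a predicted future object position before being fed to the agent. The sketch below illustrates only that augmentation step; the constant-velocity extrapolator is a hypothetical stand-in for the paper's LSTM predictor, and the 7-D robot state and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def predict_next_position(object_history, horizon=1):
    """Stand-in for the paper's LSTM trajectory predictor: constant-velocity
    extrapolation from the last two tracked object positions.
    (Hypothetical simplification; the actual module is an LSTM.)"""
    history = np.asarray(object_history, dtype=float)
    velocity = history[-1] - history[-2]       # per-step displacement
    return history[-1] + horizon * velocity    # extrapolated future position

def augment_observation(robot_obs, object_history):
    """Append the predicted future object position to the raw observation
    before passing it to the sim-trained RL policy."""
    predicted = predict_next_position(object_history)
    return np.concatenate([robot_obs, predicted])

# Object moving +0.05 m per step along x (e.g., carried on the moving car)
history = [[0.00, 0.3, 0.1], [0.05, 0.3, 0.1], [0.10, 0.3, 0.1]]
obs = augment_observation(np.zeros(7), history)  # 7-D robot state (assumed)
```

With the predicted position concatenated, the policy trained only on static objects can aim at where the object will be rather than where it was last observed.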

Original language: British English
Pages (from-to): 265-276
Number of pages: 12
Journal: Intelligent Service Robotics
Issue number: 2
State: Published - Mar 2024


Keywords

  • Dynamic grasping
  • Reinforcement learning
  • Sim2real
  • Trajectory prediction


