Abstract
Robotic manipulators are widely applied in industry because they can perform repetitive tasks with speed and accuracy that far exceed those of humans. However, traditional control methods for robotic manipulators depend heavily on careful tuning of control parameters and on prior geometric information about the end-effector and the object. These characteristics make it challenging to apply robotic manipulators in unseen and dynamic environments. This research focuses on the robotic manipulation problem. We aim to build an online, adaptive learning system that can control a manipulator operating in unseen and dynamic environments. We propose a dynamic grasping framework integrating reinforcement learning (RL) and trajectory prediction. Initially, a reinforcement learning agent learns grasping strategies for static objects. When grasping a dynamic object, this agent is augmented with predictions from an LSTM-based trajectory module. Subsequently, we address the sim-to-real (Sim2Real) challenge in vision-based robotic manipulation. We develop Seg-CURL, a low-cost unsupervised RL framework for Sim2Real transfer. It transforms RGB views into a semantic-segmentation-based canonical domain, tackling the Sim2Real gap at both the task and observation levels. After that, we propose an imitation learning framework to address data inefficiency in deep reinforcement learning. Our approach combines traditional dynamic motion primitive (DMP) methods with conditional variational autoencoders (cVAE), so that only one demonstration is needed to learn a task. Finally, to learn from datasets containing failure trajectories, we propose an offline reinforcement learning method based on a diffusion model.
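The abstract does not give the DMP formulation used in the imitation-learning chapter; as context, a minimal single-DOF sketch of the standard DMP transformation system is shown below. The function name, gain values, and the absence of a learned forcing term are illustrative assumptions, not the thesis's actual implementation. With zero forcing, the system behaves as a critically damped spring that converges to the goal.

```python
import numpy as np

def dmp_rollout(y0, goal, n_steps=1000, dt=0.001,
                alpha=25.0, beta=6.25, forcing=None):
    """Roll out a single-DOF dynamic motion primitive.

    alpha = 4 * beta gives a critically damped transformation system,
    so without a forcing term the state converges to `goal` without
    overshoot. `forcing(x)` would normally be a function fit to a
    demonstration (e.g. by a cVAE); here it defaults to zero.
    """
    y, dy = float(y0), 0.0
    x = 1.0          # canonical phase, decays from 1 toward 0
    ax = 4.0         # canonical-system decay rate (assumed value)
    traj = [y]
    for _ in range(n_steps):
        # forcing is gated by the phase x and scaled by movement amplitude
        f = forcing(x) * x * (goal - y0) if forcing else 0.0
        ddy = alpha * (beta * (goal - y) - dy) + f
        dy += ddy * dt
        y += dy * dt
        x += -ax * x * dt
        traj.append(y)
    return np.array(traj)
```

For example, `dmp_rollout(0.0, 1.0)` produces a smooth trajectory from 0 that settles at the goal 1.0 within the one-second rollout; learning a task then amounts to fitting the forcing term to a single demonstration.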
| Date of Award | 13 May 2024 |
|---|---|
| Original language | American English |
| Supervisor | Hussain |
Keywords
- Robotic Manipulation
- Deep Reinforcement Learning
- Vision-Based Manipulation
- Sim2Real
- Imitation Learning
- Offline Reinforcement Learning
- Diffusion Model