Real-time grasping strategies using event camera

Xiaoqian Huang, Mohamad Halwani, Rajkumar Muthusamy, Abdulla Ayyad, Dewald Swart, Lakmal Seneviratne, Dongming Gan, Yahya Zweiri

Research output: Contribution to journal › Article › peer-review

18 Scopus citations


Robotic vision plays a key role in perceiving the environment for grasping applications. However, conventional frame-based robotic vision suffers from motion blur and a low sampling rate, and may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Leveraging the event camera's microsecond-level sampling rate and immunity to motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing is then used to cluster and register them. The model-free approach, on the other hand, combines the developed event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
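The abstract mentions event-based object segmentation as the first stage of the model-free pipeline. As a minimal sketch of that idea (not the authors' implementation; the event tuple format, sensor size, and thresholds are assumptions for illustration), events can be accumulated into a per-pixel count image whose active pixels are then grouped into object clusters:

```python
# Illustrative sketch, NOT the paper's code: segment objects from an event
# stream by accumulating events into a count image, then clustering
# 4-connected active pixels with a flood fill. The (x, y, t, polarity)
# event format and all thresholds below are assumptions.
import numpy as np


def accumulate_events(events, width, height):
    """Build a per-pixel event-count image from (x, y, t, p) tuples."""
    img = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, _p in events:
        img[y, x] += 1
    return img


def segment_objects(count_img, min_events=1, min_pixels=3):
    """Group 4-connected active pixels into clusters; return centroids (x, y)."""
    active = count_img >= min_events
    seen = np.zeros_like(active, dtype=bool)
    centroids = []
    h, w = active.shape
    for sy in range(h):
        for sx in range(w):
            if active[sy, sx] and not seen[sy, sx]:
                # Flood-fill one connected component of active pixels.
                stack, pixels = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and active[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:  # ignore isolated noise events
                    ys, xs = zip(*pixels)
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids


# Two synthetic blobs of events on a hypothetical 32x32 sensor.
events = [(5, 5, 0, 1), (5, 6, 1, 1), (6, 5, 2, -1), (6, 6, 3, 1),
          (20, 20, 4, 1), (20, 21, 5, 1), (21, 20, 6, -1), (21, 21, 7, 1)]
img = accumulate_events(events, 32, 32)
print(segment_objects(img))  # one centroid per blob: [(5.5, 5.5), (20.5, 20.5)]
```

In a real event-based pipeline the accumulation window and noise thresholds would be tuned to the sensor, and the resulting cluster centroids could feed the visual-servoing stage described above.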

Original language: British English
Pages (from-to): 593-615
Number of pages: 23
Journal: Journal of Intelligent Manufacturing
Issue number: 2
State: Published - Feb 2022


Keywords:
  • Event camera
  • Model-based grasping
  • Model-free grasping
  • Multi-object grasping
  • Neuromorphic vision


