TY - JOUR
T1 - Real-time grasping strategies using event camera
AU - Huang, Xiaoqian
AU - Halwani, Mohamad
AU - Muthusamy, Rajkumar
AU - Ayyad, Abdulla
AU - Swart, Dewald
AU - Seneviratne, Lakmal
AU - Gan, Dongming
AU - Zweiri, Yahya
N1 - Funding Information:
This work was supported by the Khalifa University of Science and Technology under Award No. CIRA-2018-55 and RC1-2018-KUCARS, and was performed as part of the Aerospace Research and Innovation Center (ARIC), which is jointly funded by STRATA Manufacturing PJSC (a Mubadala company) and Khalifa University of Science and Technology.
Publisher Copyright:
© 2022, The Author(s).
PY - 2022/2
Y1 - 2022/2
N2 - Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the needs of evolving industrial automation. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Leveraging the event camera’s microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing then clusters and registers them. The model-free approach, on the other hand, employs the developed event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
AB - Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the needs of evolving industrial automation. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Leveraging the event camera’s microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing then clusters and registers them. The model-free approach, on the other hand, employs the developed event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
KW - Event camera
KW - Model-based grasping
KW - Model-free grasping
KW - Multi-object grasping
KW - Neuromorphic vision
UR - http://www.scopus.com/inward/record.url?scp=85122858755&partnerID=8YFLogxK
U2 - 10.1007/s10845-021-01887-9
DO - 10.1007/s10845-021-01887-9
M3 - Article
AN - SCOPUS:85122858755
SN - 0956-5515
VL - 33
SP - 593
EP - 615
JO - Journal of Intelligent Manufacturing
JF - Journal of Intelligent Manufacturing
IS - 2
ER -