TY - JOUR
T1 - Knowledge-based reasoning from human grasp demonstrations for robot grasp synthesis
AU - Faria, Diego R.
AU - Trindade, Pedro
AU - Lobo, Jorge
AU - Dias, Jorge
N1 - Author Biography:
Diego R. Faria was born on August 17th, 1979 in Londrina-PR, Brazil. He carried out his Ph.D. studies as a researcher at the Institute of Systems and Robotics — Department of Electrical and Computer Engineering — University of Coimbra, Portugal. He graduated in Information Systems Technology (data computing and information) in 2001 and completed a Computer Science Specialisation in 2002 at the State University of Londrina, Brazil. He received an M.Sc. degree in Computer Science from the Federal University of Paraná, Brazil, in 2005. During his Ph.D. research, Diego Faria was supported by a scholarship from the Portuguese Foundation for Science and Technology. He collaborated as a researcher on the European project HANDLE within the 7th Framework Programme (FP7) from 2009 to 2013 and on the European project BACS within the 6th Framework Programme (FP6) from 2006 to 2008. His research interests are robotic grasping, multimodal perception, imitation learning, computer vision, and pattern recognition.
Funding Information:
The authors would like to thank Dr. Guillaume Walck from the Institut des Systèmes Intelligents et de Robotique, UPMC-Paris, for his work on the integration of all modules into the general software and for the tests performed on the robotic platform. We also would like to thank the whole consortium of the HANDLE project for the discussions and cooperative work developed during the progress of the project. The research leading to these results has been partially supported by the HANDLE project, which has received funding from the European Community's 7th Framework Programme under grant agreement ICT 231640; by the Portuguese Foundation for Science and Technology (FCT); the Robotics Institute at Khalifa University, Abu Dhabi, UAE; and the Institute of Systems and Robotics, University of Coimbra, ISR-UC.
PY - 2014/6
Y1 - 2014/6
N2 - Humans excel at everyday manipulation tasks, being able to learn new skills and to adapt to different complex environments. This results from lifelong learning, as well as from the observation of other skilled humans. To obtain similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multi-sensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied in different contexts. Based on this strategy, we show that learning from human experience is a way to accomplish our goal of robot grasp synthesis for unknown objects. In this article we present an artificial system that relies on knowledge from previous human object grasping demonstrations. A learning process is adopted to quantify probabilistic distributions and uncertainty. These distributions are combined with preliminary knowledge towards the inference of proper grasps given a point cloud of an unknown object. The method comprises a twofold process: object decomposition and grasp synthesis. Objects are decomposed into primitives, across which similarities between past observations and new unknown objects can be established. Grasps are associated with the defined object primitives, so that feasible object regions for grasping can be determined. The hand pose relative to the object is computed for the pre-grasp and the selected grasp. We have validated our approach on a real robotic platform, a dexterous robotic hand. Results show that segmenting the object into primitives allows the most suitable regions for grasping to be identified based on previous learning. The proposed approach provides suitable grasps, better than more time-consuming analytical and geometrical approaches, contributing to autonomous grasping.
AB - Humans excel at everyday manipulation tasks, being able to learn new skills and to adapt to different complex environments. This results from lifelong learning, as well as from the observation of other skilled humans. To obtain similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multi-sensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied in different contexts. Based on this strategy, we show that learning from human experience is a way to accomplish our goal of robot grasp synthesis for unknown objects. In this article we present an artificial system that relies on knowledge from previous human object grasping demonstrations. A learning process is adopted to quantify probabilistic distributions and uncertainty. These distributions are combined with preliminary knowledge towards the inference of proper grasps given a point cloud of an unknown object. The method comprises a twofold process: object decomposition and grasp synthesis. Objects are decomposed into primitives, across which similarities between past observations and new unknown objects can be established. Grasps are associated with the defined object primitives, so that feasible object regions for grasping can be determined. The hand pose relative to the object is computed for the pre-grasp and the selected grasp. We have validated our approach on a real robotic platform, a dexterous robotic hand. Results show that segmenting the object into primitives allows the most suitable regions for grasping to be identified based on previous learning. The proposed approach provides suitable grasps, better than more time-consuming analytical and geometrical approaches, contributing to autonomous grasping.
KW - Human grasp demonstrations
KW - Object shape representation
KW - Probabilistic inference
KW - Robot grasp synthesis
UR - https://www.scopus.com/pages/publications/84899576688
U2 - 10.1016/j.robot.2014.02.003
DO - 10.1016/j.robot.2014.02.003
M3 - Article
AN - SCOPUS:84899576688
SN - 0921-8890
VL - 62
SP - 794
EP - 817
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
IS - 6
ER -