TY - GEN
T1 - Combining touch and vision for the estimation of an object's pose during manipulation
AU - Bimbo, Joao
AU - Seneviratne, Lakmal D.
AU - Althoefer, Kaspar
AU - Liu, Hongbin
PY - 2013
Y1 - 2013
N2 - Robot grasping and manipulation rely mainly on two types of sensory data: vision and tactile sensing. Localisation and recognition of the object is typically done through vision alone, while tactile sensors are commonly used for grasp control. Vision performs reliably in uncluttered environments, but its performance may deteriorate when the object is occluded, which is often the case during a manipulation task, when the object is in-hand and the robot fingers stand between the camera and the object. This paper presents a method that uses the robot's sense of touch to refine the knowledge of a manipulated object's pose from an initial estimate provided by vision. The objective is to find a transformation of the object's location that is coherent with the current proprioceptive and tactile sensory data. The method was tested with different object geometries, and applications are proposed where it can improve the overall performance of a robotic system. Experimental results show an improvement of around 70% in the estimate of the object's location compared to using vision alone.
AB - Robot grasping and manipulation rely mainly on two types of sensory data: vision and tactile sensing. Localisation and recognition of the object is typically done through vision alone, while tactile sensors are commonly used for grasp control. Vision performs reliably in uncluttered environments, but its performance may deteriorate when the object is occluded, which is often the case during a manipulation task, when the object is in-hand and the robot fingers stand between the camera and the object. This paper presents a method that uses the robot's sense of touch to refine the knowledge of a manipulated object's pose from an initial estimate provided by vision. The objective is to find a transformation of the object's location that is coherent with the current proprioceptive and tactile sensory data. The method was tested with different object geometries, and applications are proposed where it can improve the overall performance of a robotic system. Experimental results show an improvement of around 70% in the estimate of the object's location compared to using vision alone.
UR - https://www.scopus.com/pages/publications/84893771356
U2 - 10.1109/IROS.2013.6696931
DO - 10.1109/IROS.2013.6696931
M3 - Conference contribution
AN - SCOPUS:84893771356
SN - 9781467363587
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 4021
EP - 4026
BT - IROS 2013
T2 - 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013
Y2 - 3 November 2013 through 8 November 2013
ER -