Abstract
In this paper we propose a real-time system that extracts information from dense relative depth maps. The method enables the integration of depth cues into higher-level processes, including segmentation of structures, object recognition, robot navigation, or any other task that requires a three-dimensional representation of the physical environment. Inertial sensors coupled to a vision system can provide important cues about ego-motion and system pose. The sensed gravity provides a vertical reference. Depth maps obtained from a stereo camera system can be segmented using this vertical reference, identifying structures such as vertical features and levelled planes. In our work we explore the integration of inertial sensor data into vision systems. Depth maps obtained by vision systems are highly viewpoint-dependent, providing discrete layers of detected depth aligned with the camera. In this work we use inertial sensors to recover the camera pose and rectify the maps to a reference ground plane, enabling the segmentation of vertical and horizontal geometric features. The aim of this work is a fast, real-time system that can be applied to autonomous robotic systems or automated car driving systems for modelling the road and identifying obstacles and roadside features in real time.
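The abstract gives only a high-level description, but the core idea (using the gravity direction sensed by the inertial sensors to rectify stereo depth data to a reference ground plane and then separate vertical features from levelled planes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the NumPy-based function names (`rotation_aligning_gravity`, `rectify_points`, `classify_normal`), the choice of world up-axis, and the cosine threshold are all assumptions made here for illustration.

```python
# Minimal sketch (assumed, not the paper's code): rectify a stereo point cloud
# using the gravity vector sensed by inertial sensors, then label surface
# normals as belonging to levelled (horizontal) or vertical structures.
import numpy as np


def rotation_aligning_gravity(g_cam, up_world=np.array([0.0, 0.0, 1.0])):
    """Rotation that maps the sensed 'up' direction onto the world vertical."""
    a = -g_cam / np.linalg.norm(g_cam)   # camera-frame up = opposite of gravity
    b = up_world / np.linalg.norm(up_world)
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):
        # Camera 'up' is exactly opposite the world vertical: rotate 180 deg about x.
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues-style formula for the rotation taking a onto b.
    return np.eye(3) + vx + vx @ vx / (1.0 + c)


def rectify_points(points_cam, g_cam):
    """Rotate stereo points (N x 3) so the reference ground plane is horizontal."""
    R = rotation_aligning_gravity(g_cam)
    return points_cam @ R.T


def classify_normal(n, cos_tol=0.95):
    """Label a unit surface normal in the rectified frame."""
    up_dot = abs(n[2])
    if up_dot > cos_tol:
        return "levelled"   # normal parallel to gravity -> horizontal surface
    if up_dot < 1.0 - cos_tol:
        return "vertical"   # normal orthogonal to gravity -> vertical surface
    return "other"


if __name__ == "__main__":
    g = np.array([0.1, -9.7, 1.0])        # hypothetical accelerometer reading
    pts = np.random.rand(1000, 3)         # hypothetical stereo point cloud
    pts_level = rectify_points(pts, g)    # points in a gravity-aligned frame
    print(classify_normal(np.array([0.0, 0.0, 1.0])))  # -> levelled
```

In this kind of pipeline the plane labels would feed the segmentation and obstacle-detection stages mentioned in the abstract; the threshold and frame conventions would need to match the actual sensor setup.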
| Original language | British English |
| --- | --- |
| Pages | 92-97 |
| Number of pages | 6 |
| State | Published - 2002 |
| Event | 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems - Lausanne, Switzerland. Duration: 30 Sep 2002 → 4 Oct 2002 |
Conference
| Conference | 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems |
| --- | --- |
| Country/Territory | Switzerland |
| City | Lausanne |
| Period | 30/09/02 → 4/10/02 |