Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference

Jorge Lobo, Jorge Dias

Research output: Contribution to journal › Review article › peer-review



This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to give robust solutions for image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused at a very early processing stage with vision, playing a key role in the execution of visual movements such as gaze holding and tracking, while visual cues aid spatial orientation and body equilibrium. In this paper, we set a framework for using inertial sensor data in vision systems and describe some results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineation of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
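The focal-distance and horizon-line claims above can be illustrated with a minimal sketch. This is not the paper's unit-sphere formulation: it assumes a plain pinhole camera with square pixels and the principal point at the image origin, and the function names (`focal_from_vanishing_point`, `horizon_line`) are hypothetical. Under those assumptions, the back-projected ray of a horizontal direction's vanishing point must be orthogonal to the gravity vector expressed in the camera frame, which yields one equation in the focal length; the horizon line then follows as the image of all horizontal vanishing points.

```python
import math


def focal_from_vanishing_point(vp, gravity):
    """Recover focal length from one horizontal vanishing point and gravity.

    A vanishing point (u, v) of a horizontal direction back-projects to the
    ray (u/f, v/f, 1), which must be orthogonal to gravity g = (gx, gy, gz)
    in the camera frame: (u*gx + v*gy)/f + gz = 0, hence f below.
    Assumes a pinhole model with principal point at the origin (assumption,
    not the paper's unit-sphere model); degenerate when gz is near zero
    (optical axis horizontal).
    """
    u, v = vp
    gx, gy, gz = gravity
    return -(u * gx + v * gy) / gz


def horizon_line(f, gravity):
    """Homogeneous horizon-line coefficients (a, b, c) with a*u + b*v + c = 0.

    The horizon is the image of directions orthogonal to gravity; with
    K = diag(f, f, 1) it is l = K^{-T} g.
    """
    gx, gy, gz = gravity
    return (gx / f, gy / f, gz)


# Synthetic check: camera pitched 0.3 rad, true focal length 800.
theta = 0.3
f_true = 800.0
g = (0.0, math.sin(theta), math.cos(theta))          # gravity in camera frame
d = (0.0, -math.cos(theta), math.sin(theta))          # a horizontal direction
vp = (f_true * d[0] / d[2], f_true * d[1] / d[2])     # its vanishing point

f_est = focal_from_vanishing_point(vp, g)
a, b, c = horizon_line(f_est, g)
residual = a * vp[0] + b * vp[1] + c                  # vp lies on the horizon
```

With the synthetic camera above, `f_est` recovers the true focal length and the vanishing point lies on the computed horizon line (zero residual), mirroring the geometric argument in the abstract.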

Original language: British English
Pages (from-to): 1597-1608
Number of pages: 12
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 12
State: Published - Dec 2003


  • Edge and feature detection
  • Image processing and computer vision
  • Sensor fusion


