Inertial-visual fusion for camera network calibration

Hadi Aliakbarpour, Jorge Dias

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations


This paper proposes a novel technique to calibrate a network of cameras by fusing inertial and visual data. The network comprises a set of still (structure) cameras and one or more mobile agent cameras. Each camera in the network is assumed to be rigidly coupled with an Inertial Sensor (IS). By fusing inertial and visual data, a virtual camera can be defined alongside each real camera in the network using the concept of the infinite homography. This virtual camera is downward-looking: its optical axis is aligned with gravity and its image plane is horizontal. Taking advantage of these virtual cameras, the transformations between cameras are estimated from just the heights of two arbitrary points with respect to one camera in the structure network. The proposed approach is notably fast and requires minimal human interaction. Another novelty of the method is its applicability to dynamic moving cameras (robots), allowing the cameras to be calibrated and the robots consequently localized, as long as the two marked points are visible to them.
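The gravity-aligned virtual camera rests on the standard infinite-homography relation H = K R K⁻¹, where K is the camera's intrinsic matrix and R is a pure rotation: warping an image by H re-renders it as if the camera had been rotated by R without translating. A minimal sketch of this idea, assuming the IS supplies roll and pitch and using illustrative function names and angle conventions not taken from the paper:

```python
import numpy as np

def infinite_homography(K, R):
    """Infinite homography H = K R K^-1: re-renders an image as if the
    camera had undergone the pure rotation R (no translation involved)."""
    return K @ R @ np.linalg.inv(K)

def rotation_to_virtual(roll, pitch):
    """Rotation that cancels the IS-measured roll and pitch, so the
    rotated (virtual) camera looks straight down along gravity.
    The roll-about-x / pitch-about-y convention here is illustrative."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
    return (Ry @ Rx).T  # undo the measured attitude

def warp_point(H, u, v):
    """Map pixel (u, v) through a homography, with homogeneous division."""
    x = H @ np.array([u, v, 1.0])
    return x[0] / x[2], x[1] / x[2]
```

Warping every pixel of a real camera's image through `H` produces the horizontal-image-plane view that each virtual camera in the network provides; with all views reduced to this common gravity-aligned geometry, the inter-camera transformations can be recovered from the two known point heights, as described in the abstract.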

Original language: British English
Title of host publication: Proceedings - INDIN 2011
Subtitle of host publication: 2011 9th IEEE International Conference on Industrial Informatics
Number of pages: 6
State: Published - 2011
Event: 2011 9th IEEE International Conference on Industrial Informatics, INDIN 2011 - Lisbon, Portugal
Duration: 26 Jul 2011 - 29 Jul 2011

Publication series

Name: IEEE International Conference on Industrial Informatics (INDIN)
ISSN (Print): 1935-4576




Keywords

  • calibration
  • camera network
  • inertial data
  • Inertial Sensor (IS)
  • infinite homography
  • mobile robot
  • virtual camera
  • sensor fusion


