Multi-sensor 3D volumetric reconstruction using CUDA

Hadi Aliakbarpour, Luis Almeida, Paulo Menezes, Jorge Dias

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a full-body volumetric reconstruction of a person in a scene using a sensor network in which some of the sensors can be mobile. The network is composed of camera–inertial sensor (IS) pairs. By taking advantage of the IS, the 3D reconstruction is performed without assuming a planar ground. Moreover, the IS in each pair is used to define a virtual camera whose image plane is horizontal and aligned with the earth's cardinal directions. The IS is furthermore used to define a set of inertial planes in the scene. The image plane of each virtual camera is projected onto this set of parallel, horizontal inertial planes using adapted homography functions. A parallel processing architecture is proposed to perform real-time volumetric reconstruction of a human. The real-time performance is obtained by implementing the reconstruction algorithm on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). To show the effectiveness of the proposed algorithm, a variety of gestures of a person acting in the scene are reconstructed and demonstrated. Analyses have been carried out to measure the performance of the algorithm in terms of processing time. The proposed framework has potential applications such as smart rooms, human behavior analysis and 3D teleconferencing.
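The abstract's core step — warping each camera's view onto a horizontal inertial plane by a homography and intersecting the results, one plane slice at a time — can be sketched on the CPU as below. This is a minimal illustration, not the authors' implementation: the helper names (`warp`, `intersectPlane`), the binary-silhouette input, and the plane-to-image homographies are assumptions for the sketch; in the paper this per-cell loop is the work distributed across GPU threads via CUDA.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Hypothetical helper: apply a 3x3 homography H (row-major) to plane
// coordinates (u, v) and return the corresponding image point.
// The paper's adapted homography functions are not reproduced here.
static std::array<double, 2> warp(const std::array<double, 9>& H,
                                  double u, double v) {
    double x = H[0] * u + H[1] * v + H[2];
    double y = H[3] * u + H[4] * v + H[5];
    double w = H[6] * u + H[7] * v + H[8];
    return {x / w, y / w};
}

// For one horizontal inertial plane: a cell is marked occupied only if
// every camera's binary silhouette covers it after the homography warp.
// This per-cell intersection is what the paper parallelizes with CUDA,
// one thread per plane cell.
std::vector<std::uint8_t> intersectPlane(
        int width, int height,
        const std::vector<std::vector<std::uint8_t>>& silhouettes, // one width*height mask per camera
        const std::vector<std::array<double, 9>>& homographies) {  // plane -> image, one per camera
    std::vector<std::uint8_t> plane(width * height, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            bool occupied = true;
            for (std::size_t c = 0; c < silhouettes.size() && occupied; ++c) {
                auto p = warp(homographies[c], x, y);          // cell -> camera pixel
                int u = static_cast<int>(p[0] + 0.5);
                int v = static_cast<int>(p[1] + 0.5);
                occupied = (u >= 0 && u < width && v >= 0 && v < height)
                           && silhouettes[c][v * width + u] != 0;
            }
            plane[y * width + x] = occupied ? 1 : 0;
        }
    }
    return plane;
}
```

Stacking the resulting slices over the set of parallel inertial planes yields the volumetric occupancy of the person; because every cell of every slice is independent, the computation maps naturally onto a GPU thread grid.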

Original language: British English
Article number: 6
Pages (from-to): 1-14
Number of pages: 14
Journal: 3D Research
Volume: 2
Issue number: 4
DOIs
State: Published - 2011

Keywords

  • 3D rendering quality assessment
  • auto-stereoscopic visualization
  • dynamic scenes
  • multi-view camera
  • visual servoing
