Feature-level and pixel-level fusion routines when coupled to infrared night-vision tracking scheme

Yi Zhou, Abedalroof Mayyas, Ala Qattawi, Mohammed Omar

Research output: Contribution to journal › Article › peer-review


Abstract

This manuscript quantitatively evaluates the feature-based and the pixel-based fusion schemes when applied to fuse infrared LWIR and visible TV sequences. The input sequence comes from a commercial night-vision module dedicated to automotive applications. The text presents an in-house feature-level fusion routine that applies three fusing relationships (intersection, disjointing, and inclusion), in addition to a new object-tracking routine. The processing is done for two specific night-driving scenarios: a passing vehicle and an approaching vehicle with glare. The study presents the feature-level fusion details, which include a registration done at the hardware level, a Gaussian-based preprocessing, a feature extraction subroutine, and finally the fusing logic. The evaluation criteria are based on the retrieved objects' morphology and the number of features extracted. The presented comparison shows that the feature-level scheme is more robust to variations in the intensity of the input channels and provides a higher signal-to-noise ratio: 6.18 compared to 4.72 for the pixel-level case. Additionally, this study indicates that the pixel-level scheme extracts more information from the channel with higher intensity, while the feature-level scheme highlights the input with the higher number of features.
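The abstract only summarizes the two fusion strategies; the sketch below is a minimal illustration, not the authors' implementation. It assumes pre-registered, equally sized LWIR and visible frames as NumPy arrays, an illustrative blending weight for the pixel-level (weighted-average) case, a simple Gaussian-smoothing-plus-threshold stand-in for the feature extraction subroutine, Boolean mask operations as a loose reading of the intersection/disjointing/inclusion relationships, and a generic signal-to-noise definition that may differ from the metric used in the paper.

```python
# Illustrative sketch of pixel-level vs. feature-level fusion of
# pre-registered LWIR and visible (TV) frames. All parameter values,
# the threshold heuristic, and the SNR definition are assumptions,
# not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter


def pixel_level_fusion(lwir, visible, w_lwir=0.5):
    """Weighted-average fusion: each output pixel blends the two
    input channels (the weight here is illustrative)."""
    lwir = lwir.astype(np.float64)
    visible = visible.astype(np.float64)
    return w_lwir * lwir + (1.0 - w_lwir) * visible


def extract_features(frame, sigma=2.0, threshold=None):
    """Toy feature extraction: Gaussian preprocessing followed by an
    intensity threshold, returning a binary feature mask."""
    smooth = gaussian_filter(frame.astype(np.float64), sigma=sigma)
    if threshold is None:
        threshold = smooth.mean() + smooth.std()  # assumed heuristic
    return smooth > threshold


def feature_level_fusion(lwir, visible, sigma=2.0):
    """Approximate the three fusing relationships named in the abstract
    with Boolean operations on the two feature masks."""
    m_ir = extract_features(lwir, sigma)
    m_tv = extract_features(visible, sigma)
    intersection = m_ir & m_tv   # features present in both channels
    disjoint = m_ir ^ m_tv       # features present in only one channel
    inclusion = m_ir | m_tv      # every detected feature retained
    return intersection, disjoint, inclusion


def snr(fused, background_mask):
    """Rough signal-to-noise estimate: mean of the object region over
    the standard deviation of the background region."""
    signal = fused[~background_mask].mean()
    noise = fused[background_mask].std()
    return signal / noise
```

As a usage note, `pixel_level_fusion` produces a single grey-level frame directly, whereas `feature_level_fusion` produces masks that a subsequent tracking stage would consume; this mirrors the paper's observation that the pixel-level output follows the brighter channel while the feature-level output follows the channel with more extracted features.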

Original language: British English
Pages (from-to): 43-49
Number of pages: 7
Journal: Infrared Physics and Technology
Volume: 53
Issue number: 1
DOIs
State: Published - Jan 2010

Keywords

  • Feature-based fusion
  • Gaussian filtering
  • Night vision
  • Pixel-level fusion
  • Weighted average
