Multimodal hybrid features in 3D ear recognition

Karthika Ganesan, A. Chilambuchelvan, Iyyakutti Iyappan Ganapathi, Sajid Javed, Naoufel Werghi

Research output: Contribution to journal › Article › peer-review

Abstract

Despite being the most rapidly evolving biometric trait, the ear suffers from drawbacks in the two-dimensional domain, such as sensitivity to posture, illumination, and scaling. To address these issues, researchers have turned to the 3D domain, where the intrinsic features of the 3D ear contribute significantly to recognition performance. It has also been observed, however, that combining 2D and 3D ear features yields better recognition than using 3D ear features alone. This article presents a hybrid descriptor in which feature vectors for ear recognition are derived from the multimodal ear using both classical and learning-based approaches. The classical approach generates features using a covariance matrix, whereas the learning-based approach generates features with a deep auto-encoder. These features are then combined to form a hybrid descriptor. Thorough experiments were carried out on the largest publicly accessible ear database to demonstrate the effectiveness of the proposed approach and to compare its performance against state-of-the-art techniques.
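
The abstract does not specify implementation details, but the following is a minimal sketch of the general idea it describes: a handcrafted feature built from a covariance matrix of 2D/3D feature maps, a learned feature taken from the latent code of a deep auto-encoder, and a simple concatenation of the two into a hybrid descriptor. The feature-map choices, layer sizes, 64x64 crop size, concatenation-based fusion, and the names EarAutoEncoder and hybrid_descriptor are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch (not the paper's code): hybrid 2D+3D ear descriptor
# from a covariance-based handcrafted feature plus an auto-encoder latent code.
import numpy as np
import torch
import torch.nn as nn

def covariance_descriptor(feature_maps: np.ndarray) -> np.ndarray:
    """feature_maps: (d, H, W) per-pixel features, e.g. intensity and image
    gradients for the 2D image, depth for the 3D range image (assumed set).
    Returns the upper triangle of the d x d covariance matrix as a vector."""
    d = feature_maps.shape[0]
    X = feature_maps.reshape(d, -1)   # d x N sample matrix (one column per pixel)
    C = np.cov(X)                     # d x d covariance matrix
    iu = np.triu_indices(d)
    return C[iu]                      # vectorised upper triangle (C is symmetric)

class EarAutoEncoder(nn.Module):
    """Minimal dense auto-encoder; the encoder output serves as the learned
    feature vector. The architecture is an assumption for illustration."""
    def __init__(self, in_dim=64 * 64, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def hybrid_descriptor(img2d: np.ndarray, depth: np.ndarray,
                      model: EarAutoEncoder) -> np.ndarray:
    # Handcrafted part: covariance of a few simple 2D/3D feature maps.
    gy, gx = np.gradient(img2d.astype(np.float64))
    maps = np.stack([img2d, gx, gy, depth])
    cov_feat = covariance_descriptor(maps)

    # Learned part: latent code of the (assumed pre-trained) auto-encoder.
    with torch.no_grad():
        flat = torch.from_numpy(img2d.reshape(1, -1)).float()
        _, z = model(flat)
    deep_feat = z.squeeze(0).numpy()

    # Hybrid descriptor: concatenation of handcrafted and learned features.
    return np.concatenate([cov_feat, deep_feat])

# Usage with random stand-in data (64x64 ear crops assumed):
model = EarAutoEncoder()
img2d = np.random.rand(64, 64)
depth = np.random.rand(64, 64)
print(hybrid_descriptor(img2d, depth, model).shape)  # (10 + 128,) = (138,)
```

In practice the auto-encoder would be trained on ear images and the two feature vectors might be normalised or weighted before fusion; the sketch only shows the concatenation step that defines a hybrid descriptor.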

Original language: British English
Journal: Applied Intelligence
DOIs
State: Accepted/In press - 2022

Keywords

  • 2D & 3D ear
  • Biometrics
  • Computer vision
  • Deep features
  • Handcrafted features
  • Hybrid descriptor
  • Verification

