Deep user identification model with multiple biometric data

Hyoung Kyu Song, Ebrahim Alalkeem, Jaewoong Yun, Tae Ho Kim, Hyerin Yoo, Dasom Heo, Myungsu Chae, Chan Yeob Yeun

Research output: Contribution to journal · Article · peer-review

9 Scopus citations

Abstract

Background: Recognition is an essential human ability. Humans easily recognize a person from various inputs such as voice, face, or gesture. In this study, we focus on a deep learning (DL) model with multiple modalities, which offers benefits including noise reduction. We used ResNet-50 to extract features from the 2D data in our dataset. Results: This study proposes a novel multimodal and multitask model that both identifies a person's ID and classifies their gender in a single step. At the feature level, the extracted features are concatenated to form the input to the identification module. Additionally, our model design allows the number of modalities used in a single model to be changed. To demonstrate the model, we generate 58 virtual subjects from public ECG, face, and fingerprint datasets. Tests with noisy input show that the multimodal model is more robust and performs better than any single modality. Conclusions: This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model is robust against spoofing attacks, which can be significant for bio-authentication devices. The results of this study suggest a new perspective on the human identification task, with better performance than previous approaches.
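The architecture described in the abstract — per-modality feature extraction, feature-level concatenation, and two task heads trained in a single step — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear "extractors" stand in for the ResNet-50 backbones, and all layer sizes except the 58 subjects are assumed dimensions.

```python
import numpy as np

# Hypothetical sketch of the multimodal multitask design: one feature
# extractor per modality (stand-ins for ResNet-50 backbones), feature-level
# concatenation, and two softmax heads (ID + gender) evaluated jointly.
rng = np.random.default_rng(0)

def extract_features(x, w):
    """Stand-in for a ResNet-50 backbone: one linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

n_subjects = 58          # the paper builds 58 virtual subjects
feat_dim = 64            # illustrative per-modality embedding size
batch = 4

# Three modalities (ECG, face, fingerprint), each with its own extractor;
# the input dimensions here are placeholders, not the paper's.
modal_dims = {"ecg": 128, "face": 256, "fingerprint": 96}
extractors = {m: rng.standard_normal((d, feat_dim)) * 0.05
              for m, d in modal_dims.items()}

# Task heads over the concatenated features: ID (58-way) and gender (2-way).
concat_dim = feat_dim * len(modal_dims)
w_id = rng.standard_normal((concat_dim, n_subjects)) * 0.05
w_gender = rng.standard_normal((concat_dim, 2)) * 0.05

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(inputs):
    feats = [extract_features(inputs[m], extractors[m]) for m in modal_dims]
    fused = np.concatenate(feats, axis=1)          # feature-level fusion
    return softmax(fused @ w_id), softmax(fused @ w_gender)

inputs = {m: rng.standard_normal((batch, d)) for m, d in modal_dims.items()}
id_probs, gender_probs = forward(inputs)
print(id_probs.shape, gender_probs.shape)          # (4, 58) (4, 2)
```

Because modalities are fused only by concatenation, dropping or adding a modality changes just `concat_dim` and the head weights, which matches the abstract's claim that the number of modalities in a single model can be varied.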

Original language: British English
Article number: 315
Journal: BMC Bioinformatics
Volume: 21
Issue number: 1
DOIs
State: Published - 16 Jul 2020

Keywords

  • Multimodal learning
  • Multitask learning
  • Person identification
