Person Gender Classification on RGB-D Data with Self-Joint Attention

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Automatic gender classification has many potential applications, including automatic image annotation, video surveillance, security, and human-computer interaction. In recent decades, most research has focused on classifying gender using cues from 2D images of a person's frontal view, which limits real-world applicability. Moreover, classifying a person's gender across different views, poses, and scales remains a challenging problem. RGB-D images, which contain both color and depth channels, are more robust to these problems than 2D images. Recent approaches using RGB-D images explored different combinations of feature descriptors from the depth and color images to classify gender. These methods used low-level features extracted from depth and color separately, ignoring the inter-dependency between the two modalities. In this article, we propose a deep learning-based approach using a self-joint attention mechanism for human gender classification on RGB-D images. The proposed attention mechanism is designed to encode the inter-dependent information between the depth and color images to enhance the discriminative power of the features. We benchmark our method on a challenging gender dataset that spans different views, poses, and scales. The presented method outperforms state-of-the-art methods, with accuracy improvements of 5.2%, 7.5%, and 8.7% on three different test sets.
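The abstract does not detail how the self-joint attention mechanism is built. As a rough illustration of the idea it describes, the PyTorch sketch below implements one plausible cross-modal attention block in which queries from one modality attend over keys and values from the other, so the fused features encode inter-dependencies between color and depth. The class name `JointAttention`, the 1x1-convolution query/key/value projections, and the bidirectional fusion in the usage lines are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a joint attention block over RGB and depth features.
# NOT the paper's implementation; names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttention(nn.Module):
    """Cross-modal attention: queries from one modality attend to the other."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs project each modality into query/key/value spaces.
        self.q = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a, x_b: (B, C, H, W) feature maps from the two modalities.
        b, c, h, w = x_a.shape
        q = self.q(x_a).flatten(2).transpose(1, 2)          # (B, HW, C/8)
        k = self.k(x_b).flatten(2)                          # (B, C/8, HW)
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), -1)  # (B, HW, HW)
        v = self.v(x_b).flatten(2).transpose(1, 2)          # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x_a + self.gamma * out  # residual fusion of cross-modal context

# Usage: fuse backbone features in both directions, then feed a classifier.
rgb_feat = torch.randn(2, 256, 14, 14)    # e.g. from an RGB CNN backbone
depth_feat = torch.randn(2, 256, 14, 14)  # e.g. from a depth CNN backbone
attn = JointAttention(256)
fused = attn(rgb_feat, depth_feat) + attn(depth_feat, rgb_feat)
```

Applying the same block in both directions, as above, lets each modality enrich the other with shared weights; whether the published method shares weights across directions is not stated in the abstract.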

Original language: British English
Pages (from-to): 166303-166313
Number of pages: 11
Journal: IEEE Access
Volume: 9
DOIs
State: Published - 2021

Keywords

  • Attention
  • Gender classification
  • RGB-D image
