Abstract
Developing deep learning-based biometric models that can be deployed on devices with constrained memory and computational resources has proven to be a significant challenge. Previous approaches have not prioritized reducing feature map redundancy; the introduction of Ghost modules represents a major innovation in this area. A Ghost module applies a series of inexpensive linear transformations to a set of intrinsic feature maps to generate additional "ghost" feature maps, yielding a more comprehensive representation of the underlying information at low computational cost. GhostNetV1 and GhostNetV2, both built on Ghost modules, serve as the foundation for a family of lightweight face recognition models called GhostFaceNets. GhostNetV2 extends GhostNetV1 with an attention mechanism that captures long-range dependencies. Evaluation on various benchmarks shows that GhostFaceNets deliver superior performance at a computational complexity of approximately 60-275 MFLOPs, significantly lower than that of state-of-the-art (SOTA) large convolutional neural network (CNN) models, which can require billions of FLOPs. GhostFaceNets trained with the ArcFace loss on the refined MS-Celeb-1M dataset achieve SOTA performance on all benchmarks. Compared with previous SOTA mobile CNNs, GhostFaceNets greatly improve efficiency for face verification tasks. The GhostFaceNets code is available at: https://github.com/HamadYA/GhostFaceNets.
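To make the two key mechanisms in the abstract concrete, the sketches below illustrate them in PyTorch. First, a minimal Ghost module: an ordinary convolution produces the intrinsic feature maps, and cheap depthwise convolutions (the inexpensive linear transformations, as in the original GhostNet design) generate the remaining "ghost" maps. The `ratio` and `dw_kernel` parameters and the layer names are illustrative assumptions, not the authors' GhostFaceNets code.

```python
# Minimal Ghost module sketch, assuming depthwise convolutions implement the
# cheap linear transformations (as in GhostNet); illustrative, not the paper's code.
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_channels, out_channels, ratio=2, dw_kernel=3):
        super().__init__()
        # Intrinsic maps come from an ordinary (relatively expensive) 1x1 conv.
        intrinsic = math.ceil(out_channels / ratio)
        cheap = intrinsic * (ratio - 1)  # number of "ghost" maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, intrinsic, kernel_size=1, bias=False),
            nn.BatchNorm2d(intrinsic),
            nn.ReLU(inplace=True),
        )
        # Cheap linear transformations: depthwise conv over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, cheap, kernel_size=dw_kernel,
                      padding=dw_kernel // 2, groups=intrinsic, bias=False),
            nn.BatchNorm2d(cheap),
            nn.ReLU(inplace=True),
        )
        self.out_channels = out_channels

    def forward(self, x):
        y = self.primary(x)
        # Concatenate intrinsic and ghost maps, trimming to the requested width.
        return torch.cat([y, self.cheap(y)], dim=1)[:, :self.out_channels]
```

With `ratio=2`, only half of the output channels are produced by the ordinary convolution; the rest come from the much cheaper depthwise transformations, which is where the FLOP savings over a plain convolution arise.

Second, a sketch of the ArcFace loss used for training. ArcFace adds an additive angular margin `m` to the angle between an embedding and its target class weight before softmax cross-entropy; the scale `s=64` and margin `m=0.5` below are common defaults, assumed here rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
    """Margin-adjusted logits: embeddings (B, d), weight (num_classes, d)."""
    # Cosine similarity between L2-normalized embeddings and class weights.
    cos = F.normalize(embeddings) @ F.normalize(weight).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the angular margin m only to each sample's target-class angle.
    target = F.one_hot(labels, num_classes=weight.shape[0]).bool()
    return s * torch.cos(torch.where(target, theta + m, theta))

# Usage: standard cross-entropy over the margin-adjusted, scaled logits.
# loss = F.cross_entropy(arcface_logits(emb, W, y), y)
```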
| Original language | British English |
| --- | --- |
| Pages (from-to) | 35429-35446 |
| Number of pages | 18 |
| Journal | IEEE Access |
| Volume | 11 |
| DOIs | |
| State | Published - 2023 |
Keywords
- ArcFace
- attention mechanism
- cheap operations
- face recognition
- GhostNet
- lightweight