TY - GEN
T1 - Fused geometry augmented images for analyzing textured mesh
AU - Taha, Bilal
AU - Hayat, Munawar
AU - Berretti, Stefano
AU - Werghi, Naoufel
N1 - Funding Information:
Acknowledgment. This work is supported by a research fund from Cyber-Physical Systems Center (C2PS), Khalifa University, UAE.
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - In this paper, we propose a novel multi-modal mesh surface representation fusing texture and geometric data. Our approach defines an inverse mapping between different geometric descriptors computed on the mesh surface, or its down-sampled version, and the corresponding 2D texture image of the mesh, allowing the construction of fused geometrically augmented images. This new fused modality enables us to learn feature representations from 3D data in a highly efficient manner by simply employing standard convolutional neural networks in a transfer-learning mode. In contrast to existing methods, the proposed approach is both computationally and memory efficient, preserves intrinsic geometric information, and learns highly discriminative feature representations by effectively fusing shape and texture information at the data level. The efficacy of our approach is demonstrated for the tasks of facial action unit detection, expression classification, and skin lesion classification, showing competitive performance with state-of-the-art methods.
AB - In this paper, we propose a novel multi-modal mesh surface representation fusing texture and geometric data. Our approach defines an inverse mapping between different geometric descriptors computed on the mesh surface, or its down-sampled version, and the corresponding 2D texture image of the mesh, allowing the construction of fused geometrically augmented images. This new fused modality enables us to learn feature representations from 3D data in a highly efficient manner by simply employing standard convolutional neural networks in a transfer-learning mode. In contrast to existing methods, the proposed approach is both computationally and memory efficient, preserves intrinsic geometric information, and learns highly discriminative feature representations by effectively fusing shape and texture information at the data level. The efficacy of our approach is demonstrated for the tasks of facial action unit detection, expression classification, and skin lesion classification, showing competitive performance with state-of-the-art methods.
KW - Image representation
KW - Learned features
KW - Mesh surface analysis
KW - Surface classification
UR - http://www.scopus.com/inward/record.url?scp=85089609991&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-54407-2_1
DO - 10.1007/978-3-030-54407-2_1
M3 - Conference contribution
AN - SCOPUS:85089609991
SN - 9783030544065
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 12
BT - Smart Multimedia - 2nd International Conference, ICSM 2019, Revised Selected Papers
A2 - McDaniel, Troy
A2 - Berretti, Stefano
A2 - Curcio, Igor D.D.
A2 - Basu, Anup
PB - Springer
T2 - 2nd International Conference on Smart Multimedia, ICSM 2019
Y2 - 16 December 2019 through 18 December 2019
ER -