TY - JOUR
T1 - TransHD: Spatial Transformer Features Extraction for HDC Synergetic Learning
T2 - IEEE Transactions on Artificial Intelligence
AU - Hassan, Eman
AU - Bettayeb, Meriem
AU - Halawani, Yasmin
AU - Genssler, Paul R.
AU - Tesfai, Huruy Tekle
AU - Zweiri, Yahya
AU - Amrouch, Hussam
AU - Hadjileontiadis, Leontios J.
AU - Saleh, Hani
AU - Mohammad, Baker
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Artificial intelligence (AI) relies on pattern recognition and classification algorithms to achieve accurate and efficient decision-making. Convolutional neural networks (CNNs) are the state-of-the-art for processing 2D image data, but their high computational and data requirements limit their use in resource-constrained environments. Hyperdimensional computing (HDC) offers a lightweight alternative, excelling in 1D tasks, but struggles to achieve competitive accuracy for 2D image classification. Hybrid frameworks combining HDC with CNNs have been proposed to improve accuracy but inherit the computational demands of deep learning, making them unsuitable for edge and IoT devices. To address this, we propose TransHD, a framework integrating lightweight spatial transformer networks (STNs) with HDC to enhance image classification performance while maintaining efficiency. TransHD achieves up to 9% higher accuracy than base HDC models on MNIST and Fashion-MNIST using only 30% of the training data, and reduces computational complexity by 2.5x through optimized STN feature map usage. On resource-constrained platforms like the Raspberry Pi 4, TransHD accelerates inference times by 3.4x and improves energy efficiency by 3.2x, with an accuracy trade-off of approximately 3% compared to CNNs. This study demonstrates the potential of combining HDC with STNs to develop efficient AI solutions for IoT and edge computing, where low energy consumption and computational efficiency are critical.
KW - Brain-inspired Computing
KW - Hyperdimensional Computing
KW - Image Classification
KW - Spatial Transformers
UR - https://www.scopus.com/pages/publications/105005870782
U2 - 10.1109/TAI.2025.3570283
DO - 10.1109/TAI.2025.3570283
M3 - Article
AN - SCOPUS:105005870782
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
ER -