TY - GEN
T1 - SpatialHD
T2 - 5th IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2023
AU - Bettayeb, Meriem
AU - Hassan, Eman
AU - Mohammad, Baker
AU - Saleh, Hani
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Brain-inspired computing methods have shown remarkable efficiency and robustness compared to deep neural networks (DNNs). In particular, HyperDimensional Computing (HDC) and Vision Transformers (ViT) have demonstrated promising achievements in facilitating effective and reliable cognitive learning. This paper proposes SpatialHD, the first framework that combines spatial transformer networks (STN) and HDC. First, SpatialHD exploits the STN, which explicitly allows the spatial manipulation of data within the network. Then, it employs HDC to operate over the STN output by mapping feature maps into high-dimensional space, learning abstracted information, and classifying data. In addition, the STN output is resized to generate a smaller input feature map, which further reduces computing complexity and memory storage compared to HDC alone. Finally, to test the model's functionality, we applied SpatialHD to image classification on the MNIST and Fashion-MNIST datasets, using only 25% of each dataset for training. Our results show that SpatialHD improves accuracy by ≈ 8% and enhances efficiency by approximately 2.5x compared to base-HDC.
AB - Brain-inspired computing methods have shown remarkable efficiency and robustness compared to deep neural networks (DNNs). In particular, HyperDimensional Computing (HDC) and Vision Transformers (ViT) have demonstrated promising achievements in facilitating effective and reliable cognitive learning. This paper proposes SpatialHD, the first framework that combines spatial transformer networks (STN) and HDC. First, SpatialHD exploits the STN, which explicitly allows the spatial manipulation of data within the network. Then, it employs HDC to operate over the STN output by mapping feature maps into high-dimensional space, learning abstracted information, and classifying data. In addition, the STN output is resized to generate a smaller input feature map, which further reduces computing complexity and memory storage compared to HDC alone. Finally, to test the model's functionality, we applied SpatialHD to image classification on the MNIST and Fashion-MNIST datasets, using only 25% of each dataset for training. Our results show that SpatialHD improves accuracy by ≈ 8% and enhances efficiency by approximately 2.5x compared to base-HDC.
KW - Hyperdimensional Computing
KW - Image Classification
KW - Spatial Transformers
UR - https://www.scopus.com/pages/publications/85166364794
U2 - 10.1109/AICAS57966.2023.10168629
DO - 10.1109/AICAS57966.2023.10168629
M3 - Conference contribution
AN - SCOPUS:85166364794
T3 - AICAS 2023 - IEEE International Conference on Artificial Intelligence Circuits and Systems, Proceedings
BT - AICAS 2023 - IEEE International Conference on Artificial Intelligence Circuits and Systems, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 June 2023 through 13 June 2023
ER -