TransHD: Spatial Transformer Features Extraction for HDC Synergetic Learning

Research output: Contribution to journal · Article · peer-review

Abstract

Artificial intelligence (AI) relies on pattern recognition and classification algorithms to achieve accurate and efficient decision-making. Convolutional neural networks (CNNs) are the state of the art for processing 2D image data, but their high computational and data requirements limit their use in resource-constrained environments. Hyperdimensional computing (HDC) offers a lightweight alternative and excels in 1D tasks, but struggles to achieve competitive accuracy for 2D image classification. Hybrid frameworks combining HDC with CNNs have been proposed to improve accuracy, but they inherit the computational demands of deep learning, making them unsuitable for edge and IoT devices. To address this, we propose TransHD, a framework integrating lightweight spatial transformer networks (STNs) with HDC to enhance image classification performance while maintaining efficiency. TransHD achieves up to 9% higher accuracy than base HDC models on MNIST and Fashion-MNIST using only 30% of the training data, and reduces computational complexity by 2.5x through optimized STN feature map usage. On resource-constrained platforms like the Raspberry Pi 4, TransHD reduces inference time by 3.4x and improves energy efficiency by 3.2x, with an accuracy trade-off of approximately 3% compared to CNNs. This study demonstrates the potential of combining HDC with STNs to develop efficient AI solutions for IoT and edge computing, where low energy consumption and computational efficiency are critical.
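
For intuition, the sketch below shows one plausible way to wire the pipeline the abstract describes: a lightweight STN learns an affine transform that aligns the input image, and the aligned feature map is projected into a high-dimensional bipolar hypervector space where class prototypes are bundled during training and queried by cosine similarity at inference. The module structure, layer sizes, and the random-projection encoder are illustrative assumptions, not the authors' published implementation.

    # Hypothetical sketch of an STN + HDC hybrid in the spirit of TransHD.
    # Layer sizes and the random-projection encoder are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LightweightSTN(nn.Module):
        """Predicts an affine transform and resamples the input image."""
        def __init__(self):
            super().__init__()
            self.loc = nn.Sequential(                      # localization network
                nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            )
            self.fc = nn.Sequential(nn.Linear(10 * 3 * 3, 32), nn.ReLU(),
                                    nn.Linear(32, 6))      # 6 affine parameters
            # initialize the regressor to the identity transform
            self.fc[-1].weight.data.zero_()
            self.fc[-1].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):                              # x: (B, 1, 28, 28)
            theta = self.fc(self.loc(x).flatten(1)).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

    class HDCClassifier:
        """Random-projection HDC encoder with bundled class prototypes."""
        def __init__(self, in_dim, dim=10000, n_classes=10):
            self.proj = torch.randn(in_dim, dim)           # fixed random projection
            self.prototypes = torch.zeros(n_classes, dim)

        def encode(self, feats):                           # feats: (B, in_dim)
            return torch.sign(feats @ self.proj)           # bipolar hypervectors

        def train_step(self, feats, labels):
            self.prototypes.index_add_(0, labels,
                                       self.encode(feats)) # bundle per class

        def predict(self, feats):
            hv = self.encode(feats)
            sims = F.cosine_similarity(hv.unsqueeze(1),
                                       self.prototypes.unsqueeze(0), dim=-1)
            return sims.argmax(dim=1)                      # nearest prototype

    # Usage on a dummy MNIST-sized batch
    stn = LightweightSTN().eval()
    hdc = HDCClassifier(in_dim=28 * 28)
    x = torch.randn(4, 1, 28, 28)
    with torch.no_grad():
        feats = stn(x).flatten(1)                          # STN-aligned features
    hdc.train_step(feats, torch.tensor([3, 1, 4, 1]))
    print(hdc.predict(feats))

Because the hypervector prototypes are built by simple accumulation rather than backpropagation, only the small STN would require gradient-based training, which is consistent with the efficiency claims in the abstract.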

Original language: British English
Journal: IEEE Transactions on Artificial Intelligence
State: Accepted/In press - 2025

Keywords

  • Brain-inspired Computing
  • Hyperdimensional Computing
  • Image Classification
  • Spatial Transformers

