TY - GEN
T1 - Adapting Spatial Transformer Networks Across Diverse Hardware Platforms
T2 - 6th IEEE International Conference on AI Circuits and Systems, AICAS 2024
AU - Bettayeb, Meriem
AU - Hassan, Eman
AU - Khan, Muhammad Umair
AU - Halawani, Yasmin
AU - Saleh, Hani
AU - Mohammad, Baker
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The field of artificial intelligence (AI) encompasses a variety of algorithms designed to achieve high accuracy at low computational cost and latency. One popular algorithm is the vision transformer (ViT), which excels at various computer vision tasks owing to its ability to capture long-range dependencies effectively. This paper analyzes a computing paradigm, namely spatial transformer networks (STN), in terms of accuracy and hardware complexity for image classification tasks. The paper shows that for 2D applications, such as image recognition and classification, STN serves as an effective backbone for AI algorithms because of its efficiency and fast inference time. This framework offers a promising solution for efficient and accurate AI on resource-constrained Internet of Things (IoT) and edge devices. A comparative analysis of STN implementations on central processing unit (CPU), Raspberry Pi (RPi), and Resistive Random Access Memory (RRAM) architectures reveals nuanced performance variations, providing valuable insights into their respective computational efficiency and energy utilization.
AB - The field of artificial intelligence (AI) encompasses a variety of algorithms designed to achieve high accuracy at low computational cost and latency. One popular algorithm is the vision transformer (ViT), which excels at various computer vision tasks owing to its ability to capture long-range dependencies effectively. This paper analyzes a computing paradigm, namely spatial transformer networks (STN), in terms of accuracy and hardware complexity for image classification tasks. The paper shows that for 2D applications, such as image recognition and classification, STN serves as an effective backbone for AI algorithms because of its efficiency and fast inference time. This framework offers a promising solution for efficient and accurate AI on resource-constrained Internet of Things (IoT) and edge devices. A comparative analysis of STN implementations on central processing unit (CPU), Raspberry Pi (RPi), and Resistive Random Access Memory (RRAM) architectures reveals nuanced performance variations, providing valuable insights into their respective computational efficiency and energy utilization.
KW - artificial intelligence
KW - hardware platforms
KW - Image Classification
KW - Raspberry Pi
KW - Spatial Transformer Network
KW - vision transformer
UR - http://www.scopus.com/inward/record.url?scp=85199913425&partnerID=8YFLogxK
U2 - 10.1109/AICAS59952.2024.10595915
DO - 10.1109/AICAS59952.2024.10595915
M3 - Conference contribution
AN - SCOPUS:85199913425
T3 - 2024 IEEE 6th International Conference on AI Circuits and Systems, AICAS 2024 - Proceedings
SP - 547
EP - 551
BT - 2024 IEEE 6th International Conference on AI Circuits and Systems, AICAS 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 April 2024 through 25 April 2024
ER -