TY - JOUR
T1 - Efficient event-based robotic grasping perception using hyperdimensional computing
AU - Hassan, Eman
AU - Zou, Zhuowen
AU - Chen, Hanning
AU - Imani, Mohsen
AU - Zweiri, Yahaya
AU - Saleh, Hani
AU - Mohammad, Baker
N1 - Publisher Copyright:
© 2024 The Author(s)
PY - 2024/7
Y1 - 2024/7
AB - Grasping is fundamental to many robotic applications, particularly in industrial contexts. Accurate inference of object properties is a crucial step toward improving grasp quality. Dynamic and Active-pixel Vision Sensors (DAVIS), increasingly used for robotic grasping, offer superior energy efficiency, lower latency, and higher temporal resolution than traditional cameras. However, the data they generate can be complex and noisy, requiring substantial preprocessing. To address these challenges, we introduce GraspHD, an end-to-end algorithm that leverages brain-inspired hyperdimensional computing (HDC) to learn object size and hardness and to estimate grasping force. This approach avoids resource-intensive preprocessing steps by capitalizing on the simplicity and inherent parallelism of HDC operations. Our comprehensive analysis shows that GraspHD surpasses state-of-the-art approaches in overall classification accuracy. We have also implemented GraspHD on an FPGA to evaluate system efficiency. The results demonstrate that GraspHD runs 10x faster and is 26x more energy-efficient than existing learning algorithms while maintaining robust performance in noisy environments. These findings underscore the potential of GraspHD as a more efficient and effective solution for real-time robotic grasping applications.
KW - Artificial intelligence
KW - Dynamic vision sensor
KW - Hyperdimensional computing
KW - Neuromorphic vision
KW - Object grasping
KW - Robotics
UR - http://www.scopus.com/inward/record.url?scp=85192439772&partnerID=8YFLogxK
U2 - 10.1016/j.iot.2024.101207
DO - 10.1016/j.iot.2024.101207
M3 - Article
AN - SCOPUS:85192439772
SN - 2542-6605
VL - 26
JO - Internet of Things (Netherlands)
JF - Internet of Things (Netherlands)
M1 - 101207
ER -