TY - GEN
T1 - Predicting Interaction Shape of Soft Continuum Robots using Deep Visual Models
AU - Huang, Yunqi
AU - Alkayas, Abdulaziz Y.
AU - Shi, Jialei
AU - Renda, Federico
AU - Wurdemann, Helge
AU - Thuruthel, Thomas George
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Soft continuum robots, characterized by their inherent compliance and dexterity, are increasingly pivotal in applications requiring delicate interactions with the environment, such as in the medical field. Despite their advantages, challenges persist in accurately modeling and controlling their shape during interactions with surrounding objects. This is because of the difficulty in modeling the large number of degrees of freedom in soft-bodied objects, which become more active during interactions. In this study, we present a deep visual model to predict the interaction shapes of a soft continuum robot in contact with surrounding objects. By formulating this task as a forward-statics problem, the model uses initial-state images containing the object configuration, together with future actuation values, to predict interactive-state images of the robot under that actuation condition. We developed and tested the model in both simulated and physical environments, explored the model's predictive capabilities using monocular and binocular views, and tested the model's generalization ability on different datasets. Our results show that deep learning methods are a promising tool for solving the complex problem of predicting the shape of a soft continuum robot interacting with its environment, requiring no prior knowledge of the system dynamics or explicit mapping of the environment. This study paves the way for future explorations in robot-environment interaction modeling and the development of more adaptable interaction shape control strategies.
UR - https://www.scopus.com/pages/publications/85216494849
U2 - 10.1109/IROS58592.2024.10801261
DO - 10.1109/IROS58592.2024.10801261
M3 - Conference contribution
AN - SCOPUS:85216494849
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 11381
EP - 11387
BT - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -