TY - GEN
T1 - Evaluating Visual-Selective Visual-Inertial Odometry
T2 - 21st International Conference on Advanced Robotics, ICAR 2023
AU - Sudevan, Vidya
AU - Zayer, Fakhreddine
AU - Javed, Sajid
AU - Karki, Hamad
AU - De Masi, Giulia
AU - Dias, Jorge
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper presents an evaluation of the performance of Visual-Selective Visual-Inertial Odometry (VS-VIO), a hybrid learning-based multi-modal pose estimation framework, in the challenging underwater domain. The assessment is based on Root Mean Square Error (RMSE) scores for translation and rotation vectors, compared to their reference values. The underwater environment, characterized by low lighting and high turbidity due to suspended particles, poses significant challenges for pose estimation. Understanding how hybrid learning-based multi-modal frameworks perform in such conditions is crucial for improving underwater navigation and exploration. In this study, we thoroughly analyze the performance of VS-VIO and its baseline model at the sub-sequence level, focusing on pose error. Additionally, we assess various technical aspects during the inference phase, including inference speed, power consumption, GPU utilization, GPU memory usage, and temperature. All evaluations are conducted using the AQUALOC dataset. Our findings reveal that the policy network within VS-VIO exhibits the ability to dynamically reduce the utilization of the visual modality while maintaining pose estimation accuracy. However, our analysis shows no statistically significant reduction in the percentage of visual modality usage when altering the penalty factor. These insights provide valuable guidelines for enhancing the performance of hybrid learning-based multi-modal pose estimation frameworks in challenging underwater environments, contributing to advancements in underwater navigation and exploration technologies.
AB - This paper presents an evaluation of the performance of Visual-Selective Visual-Inertial Odometry (VS-VIO), a hybrid learning-based multi-modal pose estimation framework, in the challenging underwater domain. The assessment is based on Root Mean Square Error (RMSE) scores for translation and rotation vectors, compared to their reference values. The underwater environment, characterized by low lighting and high turbidity due to suspended particles, poses significant challenges for pose estimation. Understanding how hybrid learning-based multi-modal frameworks perform in such conditions is crucial for improving underwater navigation and exploration. In this study, we thoroughly analyze the performance of VS-VIO and its baseline model at the sub-sequence level, focusing on pose error. Additionally, we assess various technical aspects during the inference phase, including inference speed, power consumption, GPU utilization, GPU memory usage, and temperature. All evaluations are conducted using the AQUALOC dataset. Our findings reveal that the policy network within VS-VIO exhibits the ability to dynamically reduce the utilization of the visual modality while maintaining pose estimation accuracy. However, our analysis shows no statistically significant reduction in the percentage of visual modality usage when altering the penalty factor. These insights provide valuable guidelines for enhancing the performance of hybrid learning-based multi-modal pose estimation frameworks in challenging underwater environments, contributing to advancements in underwater navigation and exploration technologies.
KW - Hybrid CNN-LSTM Framework
KW - Multi-Modal Multi-Rate Data Fusion
KW - Pose Estimation
KW - Underwater Robotics
KW - Visual-Inertial Odometry (VIO)
UR - https://www.scopus.com/pages/publications/85185834442
U2 - 10.1109/ICAR58858.2023.10436500
DO - 10.1109/ICAR58858.2023.10436500
M3 - Conference contribution
AN - SCOPUS:85185834442
T3 - 2023 21st International Conference on Advanced Robotics, ICAR 2023
SP - 639
EP - 644
BT - 2023 21st International Conference on Advanced Robotics, ICAR 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 December 2023 through 8 December 2023
ER -