A Stacked LSTM-Based Approach for Reducing Semantic Pose Estimation Error

Rana Azzam, Yusra Alkendi, Tarek Taha, Shoudong Huang, Yahya Zweiri

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


Achieving high estimation accuracy is crucial for semantic simultaneous localization and mapping (SLAM) tasks. Yet, the estimation process is vulnerable to several sources of error, including limitations of the instruments used to perceive the environment, shortcomings of the employed algorithm, environmental conditions, and other unpredictable noise. In this article, a novel stacked long short-term memory (LSTM)-based error reduction approach is developed to enhance the accuracy of semantic SLAM in the presence of such error sources. Training and testing data sets were constructed through simulated and real-time experiments. The effectiveness of the proposed approach was demonstrated by its ability to capture and reduce semantic SLAM estimation errors in both training and testing data sets. Quantitative performance was measured using the absolute trajectory error (ATE) metric. The proposed approach was compared with vanilla and bidirectional LSTM networks, shallow and deep neural networks, and support vector machines. It outperformed all other structures and significantly improved the accuracy of semantic SLAM. To further verify its applicability, the proposed approach was tested on real-time sequences from the TUM RGB-D data set, where it improved the estimated trajectories.
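The abstract evaluates accuracy with the absolute trajectory error (ATE). As a rough illustration only (not the authors' code), ATE is commonly reported as the root-mean-square of the translational differences between time-synchronized estimated and ground-truth poses; the minimal NumPy sketch below assumes the two trajectories are already aligned (e.g., by a Horn/Umeyama rigid-body fit, which is omitted here):

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of translational error between two aligned trajectories.

    gt, est: (N, 3) arrays of ground-truth and estimated positions.
    Assumes the trajectories are time-synchronized and already
    rigidly aligned; the alignment step is deliberately omitted.
    """
    diffs = gt - est                       # per-pose translation error
    sq_norms = np.sum(diffs ** 2, axis=1)  # squared Euclidean norms
    return float(np.sqrt(np.mean(sq_norms)))

# Toy example: estimate offset from ground truth by 0.1 m along x.
gt = np.zeros((5, 3))
est = gt + np.array([0.1, 0.0, 0.0])
print(absolute_trajectory_error(gt, est))  # prints 0.1 (up to float rounding)
```

A constant 0.1 m offset on every pose yields an ATE of exactly 0.1 m, which is the sanity check the example exercises.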

Original language: British English
Article number: 9235399
Journal: IEEE Transactions on Instrumentation and Measurement
State: Published - 2021


  • Deep learning
  • localization error
  • long short-term memory (LSTM)
  • measurement uncertainty
  • semantic simultaneous localization and mapping (SLAM)
  • sensor noise


