Robust RGB-D SLAM in dynamic environments for autonomous vehicles

Vision-based SLAM has played an important role in many robotic applications. However, most existing visual SLAM methods are developed under a static-world assumption, and robustness in dynamic environments remains a challenging problem. In this paper, we propose a robust RGB-D SLAM system for autonomous vehicles in dynamic scenarios which uses geometry-only information to reduce the impact of moving objects. To achieve this, we introduce an effective and efficient dynamic point detection module in a feature-based SLAM system. Specifically, for each new RGB-D image pair, we first segment the depth image into a few regions using the K-Means algorithm, and then identify the dynamic regions via their reprojection errors. The feature points located in these dynamic regions are removed, and only static ones are used for pose estimation. A dense map that contains only the static parts of the environment is also produced by removing dynamic regions from the keyframes. Extensive experiments on a public dataset and in real-world scenarios demonstrate that our method provides significant improvement in localization accuracy and mapping quality in dynamic environments.
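The pipeline the abstract describes (K-Means segmentation of the depth image, then a per-region reprojection-error check to flag moving regions) can be sketched compactly. The following is a minimal illustrative sketch, not the authors' implementation: the function name, the choice to cluster on raw depth values alone, and the error threshold are assumptions made here for clarity.

# Minimal sketch of geometry-only dynamic point detection, assuming an
# OpenCV/NumPy setting. All names and thresholds are illustrative.
import numpy as np
import cv2

def detect_static_features(depth, keypoints_uv, points_3d_world, T_cw, K,
                           n_regions=8, err_thresh=3.0):
    """Return a boolean mask over keypoints: True = likely static.

    depth            HxW float32 depth image (metres)
    keypoints_uv     Nx2 pixel coordinates of matched features
    points_3d_world  Nx3 world coordinates of the matched map points
    T_cw             4x4 current camera pose estimate (world -> camera)
    K                3x3 camera intrinsics
    """
    h, w = depth.shape

    # 1. Segment the depth image into a few regions with K-Means.
    #    (Clustering on depth value alone is an assumption of this sketch.)
    data = depth.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    _, labels, _ = cv2.kmeans(data, n_regions, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    label_img = labels.reshape(h, w)

    # 2. Reproject the matched map points with the current pose estimate
    #    and measure the pixel error against the observed keypoints.
    pts_h = np.hstack([points_3d_world, np.ones((len(points_3d_world), 1))])
    pts_cam = (T_cw @ pts_h.T).T[:, :3]
    proj = (K @ pts_cam.T).T
    proj_uv = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj_uv - keypoints_uv, axis=1)

    # 3. Average the reprojection error per depth region; regions dominated
    #    by moving objects show consistently large errors.
    u = np.clip(keypoints_uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(keypoints_uv[:, 1].astype(int), 0, h - 1)
    regions = label_img[v, u]
    dynamic_region = np.zeros(n_regions, dtype=bool)
    for r in range(n_regions):
        in_r = regions == r
        if in_r.any() and err[in_r].mean() > err_thresh:
            dynamic_region[r] = True

    # 4. Keep only features that fall in static regions; in a feature-based
    #    front end this mask would gate the matches used for pose estimation.
    return ~dynamic_region[regions]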


Bibliographic Details
Main Authors: Ji, Tete; Yuan, Shenghai; Xie, Lihua
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Computer and Information Science; Location awareness; Visualization
Online Access:https://hdl.handle.net/10356/182130
Institution: Nanyang Technological University
Conference: 2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV)
Pages: 665-671
ISBN: 978-1-6654-7687-4
DOI: 10.1109/ICARCV57592.2022.10004324
Citation: Ji, T., Yuan, S. & Xie, L. (2023). Robust RGB-D SLAM in dynamic environments for autonomous vehicles. 2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), 665-671. https://dx.doi.org/10.1109/ICARCV57592.2022.10004324
Funding: This work was partly supported by the Center for Advanced Robotics Technology Innovation (CARTIN) and the Delta-NTU Corporate Laboratory for Cyber-Physical Systems under the National Research Foundation (NRF) Singapore Corporate Laboratory@University Scheme.
Rights: © 2022 IEEE. All rights reserved.