Robust RGB-D SLAM in dynamic environments for autonomous vehicles
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2025
Subjects:
Online Access: https://hdl.handle.net/10356/182130
Institution: Nanyang Technological University
Summary: Vision-based SLAM has played an important role in many robotic applications. However, most existing visual SLAM methods are developed under a static-world assumption, and robustness in dynamic environments remains a challenging problem. In this paper, we propose a robust RGB-D SLAM system for autonomous vehicles in dynamic scenarios that uses geometry-only information to reduce the impact of moving objects. To achieve this, we introduce an effective and efficient dynamic-point detection module in a feature-based SLAM system. Specifically, for each new RGB-D image pair, we first segment the depth image into a few regions using the KMeans algorithm and then identify the dynamic regions via their reprojection errors. The feature points located in these dynamic regions are removed, and only static ones are used for pose estimation. A dense map that contains only the static parts of the environment is also produced by removing dynamic regions from the keyframes. Extensive experiments on a public dataset and in real-world scenarios demonstrate that our method provides significant improvements in localization accuracy and mapping quality in dynamic environments.
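The geometry-only detection step described in the summary (segment the depth image into a few regions with KMeans, score each region by its reprojection error, and keep only feature points in low-error regions for pose estimation) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the clustering features, the function and parameter names, the number of regions, and the error threshold are all placeholders chosen for the example.

```python
# Minimal sketch (assumed names and parameters) of KMeans-based dynamic-region
# detection on a depth image, scored by per-region reprojection error.
import numpy as np
from sklearn.cluster import KMeans

def filter_dynamic_points(depth, keypoints, reproj_errors,
                          n_regions=6, error_thresh=2.0):
    """depth: HxW depth image (metres); keypoints: Nx2 pixel coords (u, v);
    reproj_errors: per-keypoint reprojection error (pixels) computed from the
    previous pose estimate. Returns a boolean mask of keypoints kept as static."""
    h, w = depth.shape
    # Cluster pixels on (depth, normalized u, normalized v) so that the
    # resulting regions are spatially coherent (an assumption of this sketch).
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    feats = np.stack([depth.ravel(), us.ravel() / w, vs.ravel() / h], axis=1)
    labels = KMeans(n_clusters=n_regions, n_init=4,
                    random_state=0).fit_predict(feats).reshape(h, w)

    # Mean reprojection error of the keypoints falling inside each region.
    kp_labels = labels[keypoints[:, 1].astype(int), keypoints[:, 0].astype(int)]
    region_err = np.array([
        reproj_errors[kp_labels == r].mean() if np.any(kp_labels == r) else 0.0
        for r in range(n_regions)])

    # Regions whose average error exceeds the threshold are flagged as dynamic;
    # points inside them are discarded so only static points feed pose estimation.
    dynamic_regions = region_err > error_thresh
    return ~dynamic_regions[kp_labels]

# Toy usage with synthetic data.
depth = np.random.uniform(0.5, 5.0, (120, 160))
kps = np.column_stack([np.random.randint(0, 160, 200),
                       np.random.randint(0, 120, 200)])
errors = np.random.exponential(1.0, 200)
static_mask = filter_dynamic_points(depth, kps, errors)
print(f"{static_mask.sum()} of {len(kps)} keypoints kept as static")
```

Clustering on depth together with normalized pixel coordinates keeps each region spatially compact, which loosely mirrors the idea of segmenting the depth image into a few regions before scoring them; the real system would also remove the flagged regions from keyframes when building the static dense map.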