Fast semantic-aware motion state detection for visual SLAM in dynamic environment
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/178580
Institution: Nanyang Technological University
Summary: Existing visual SLAM (vSLAM) systems fail to perform well in dynamic environments because they cannot effectively ignore moving objects during pose estimation and mapping. We propose a lightweight approach that improves the robustness of existing feature-based RGB-D and stereo vSLAM by accurately removing the dynamic outliers in the scene that cause failures in pose estimation and mapping. First, a novel motion state detection algorithm using depth and feature-flow information is presented to identify regions in the scene with high moving probability. This information is then fused with semantic cues via a probability framework to enable accurate and robust moving-object extraction, retaining the useful features for pose estimation and mapping. To reduce the computational cost of extracting semantic information in every frame, we propose to extract semantics only on keyframes with significant changes in image content. Semantic propagation compensates for the changes in the intermediate frames (i.e., non-keyframes); this is achieved by computing a dense transformation map from the available feature-flow vectors. The proposed techniques can be integrated into existing vSLAM systems to increase their robustness in dynamic environments without incurring much computational cost. Our work highlights the importance of distinguishing between the motion states of potentially moving objects for vSLAM in highly dynamic environments. We provide extensive experimental results on four well-known RGB-D and stereo datasets to show that the proposed technique outperforms existing vSLAM methods in indoor and outdoor environments under various dynamic scenarios, including crowded scenes. We also run our experiments on a low-cost embedded platform, the Jetson TX1, to demonstrate the computational efficiency of our method.
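The fusion step described in the summary, combining a geometric moving probability (from depth and feature-flow residuals) with a semantic prior, can be sketched as a simple per-feature Bayesian update. The class IDs, prior values, and function names below are illustrative assumptions, not the paper's actual framework:

```python
import numpy as np

# Hypothetical per-class prior probability that an object of this class
# is moving (values are illustrative, not taken from the paper).
SEMANTIC_MOVING_PRIOR = {
    0: 0.05,  # background / static structure
    1: 0.90,  # person
    2: 0.40,  # car (potentially moving)
}

def fuse_moving_probability(motion_prob, semantic_labels):
    """Fuse a geometric moving probability (e.g., derived from depth and
    feature-flow residuals) with a class-based semantic prior, assuming
    the two cues are conditionally independent (naive Bayes update)."""
    prior = np.vectorize(SEMANTIC_MOVING_PRIOR.get)(semantic_labels)
    p_move = motion_prob * prior              # evidence for "moving"
    p_static = (1.0 - motion_prob) * (1.0 - prior)  # evidence for "static"
    return p_move / (p_move + p_static)       # normalized posterior

def dynamic_feature_mask(motion_prob, semantic_labels, threshold=0.5):
    """Features whose fused moving probability exceeds the threshold are
    treated as dynamic outliers and excluded from pose estimation."""
    return fuse_moving_probability(motion_prob, semantic_labels) > threshold
```

For example, a feature with high flow residual (0.8) on a person is flagged as dynamic, while the same residual on background structure is retained, since the weak semantic prior counteracts the geometric cue. This illustrates why fusing both cues is more robust than thresholding either one alone.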