ORB-SLAM3-YOLOv3 : a visual SLAM based on deep learning for dynamic environments

Bibliographic Details
Main Author: Chen, Peiyu
Other Authors: Xie, Lihua
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/154873
Item Description
Summary: With the rapid development of artificial intelligence, robotics, and autonomous driving technologies, visual SLAM has received extensive attention from research communities. However, current research on visual SLAM systems is mainly based on static, simple environments, and system performance can be severely degraded in complex environments. Navigation and mapping in dynamic environments is a very challenging problem for autonomous robots. In this dissertation, we develop a semantic SLAM system by combining ORB-SLAM3 with the YOLOv3 neural network. The proposed system comprises five parallel threads: semantic segmentation, tracking, local mapping, loop and map merging, and ATLAS. ORB-SLAM3-YOLOv3 uses YOLOv3 to preprocess each image and segment the prior dynamic objects in the frame; a black mask is then applied over the dynamic objects to reduce their impact. Finally, we evaluate the accuracy of the proposed system under Ubuntu 16.04. Experimental results show that the proposed method effectively reduces the influence of dynamic objects on the TUM and KITTI datasets, and the absolute trajectory accuracy of ORB-SLAM3-YOLOv3 is improved compared with ORB-SLAM3. The system runs at about 120 ms per frame on a CPU.
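
The masking step described in the summary (covering YOLOv3 detections of prior dynamic classes with a black mask before the frame reaches ORB-SLAM3's feature extraction) could look roughly like the following C++ sketch using OpenCV. This is a minimal illustration, not the thesis's actual code: the Detection struct, the kDynamicClasses set, and maskDynamicObjects() are hypothetical names introduced here for clarity.

    // Sketch: paint prior dynamic objects black so that no ORB features
    // are extracted on them. Assumes YOLOv3 has already produced labeled
    // bounding boxes for the current frame.
    #include <opencv2/opencv.hpp>
    #include <set>
    #include <string>
    #include <vector>

    struct Detection {
        std::string label;  // YOLOv3 class label, e.g. "person"
        cv::Rect box;       // bounding box in pixel coordinates
    };

    // Classes treated as prior dynamic objects (an assumed example set).
    const std::set<std::string> kDynamicClasses = {"person", "car", "bicycle"};

    // Fill each dynamic-object region with zeros (a black mask) in place.
    void maskDynamicObjects(cv::Mat& frame, const std::vector<Detection>& dets) {
        for (const Detection& d : dets) {
            if (kDynamicClasses.count(d.label)) {
                // Clip the box to the image bounds, then blacken the region.
                cv::Rect roi = d.box & cv::Rect(0, 0, frame.cols, frame.rows);
                frame(roi).setTo(cv::Scalar::all(0));
            }
        }
    }

Because the masked regions contain no gradient information, the ORB extractor finds no keypoints there, so dynamic objects contribute no map points or tracking correspondences.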