Combined 2D and 3D features for robust RGB-D visual odometry

As a novel type of sensor, RGB-D cameras have attracted substantial research attention in indoor SLAM because they can provide both RGB and depth information. Currently, most existing mature RGB-D SLAM solutions are keypoint-based and suffer from significant performance degradation in textureless scenes due to the lack of keypoints. Some works attempt to address this issue by incorporating line features. However, these methods still extract line features based only on 2D RGB images, which restricts the use of the environment's 3D structural information and therefore yields only limited performance improvement. This project focuses on the fusion of 2D and 3D features for a robust RGB-D SLAM system. The proposed visual odometry extracts point, line, and surface features in the front-end to fully utilize the environment's texture and structural information. In the back-end, a combination of loosely-coupled and tightly-coupled schemes is designed for the multiple features to ensure both the robustness and the scalability of the system. Experimental results against existing state-of-the-art RGB-D SLAM systems verify the effectiveness and robustness of the proposed method. The proposed approach performs well both in scenes with limited texture or illumination variations and in common scenes.

Bibliographic Details
Main Author: Cai, Pei
Other Authors: Xie Lihua
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/170197
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-170197
record_format dspace
spelling sg-ntu-dr.10356-170197 2023-09-04T01:02:58Z Combined 2D and 3D features for robust RGB-D visual odometry Cai, Pei Xie Lihua School of Electrical and Electronic Engineering ELHXIE@ntu.edu.sg Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics As a novel type of sensor, RGB-D cameras have attracted substantial research attention in indoor SLAM because they can provide both RGB and depth information. Currently, most existing mature RGB-D SLAM solutions are keypoint-based and suffer from significant performance degradation in textureless scenes due to the lack of keypoints. Some works attempt to address this issue by incorporating line features. However, these methods still extract line features based only on 2D RGB images, which restricts the use of the environment's 3D structural information and therefore yields only limited performance improvement. This project focuses on the fusion of 2D and 3D features for a robust RGB-D SLAM system. The proposed visual odometry extracts point, line, and surface features in the front-end to fully utilize the environment's texture and structural information. In the back-end, a combination of loosely-coupled and tightly-coupled schemes is designed for the multiple features to ensure both the robustness and the scalability of the system. Experimental results against existing state-of-the-art RGB-D SLAM systems verify the effectiveness and robustness of the proposed method. The proposed approach performs well both in scenes with limited texture or illumination variations and in common scenes. Master of Science (Computer Control and Automation) 2023-08-31T08:31:02Z 2023-08-31T08:31:02Z 2023 Thesis-Master by Coursework Cai, P. (2023). Combined 2D and 3D features for robust RGB-D visual odometry. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/170197 https://hdl.handle.net/10356/170197 en application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
spellingShingle Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Cai, Pei
Combined 2D and 3D features for robust RGB-D visual odometry
description As a novel type of sensor, RGB-D cameras have attracted substantial research attention in indoor SLAM because they can provide both RGB and depth information. Currently, most existing mature RGB-D SLAM solutions are keypoint-based and suffer from significant performance degradation in textureless scenes due to the lack of keypoints. Some works attempt to address this issue by incorporating line features. However, these methods still extract line features based only on 2D RGB images, which restricts the use of the environment's 3D structural information and therefore yields only limited performance improvement. This project focuses on the fusion of 2D and 3D features for a robust RGB-D SLAM system. The proposed visual odometry extracts point, line, and surface features in the front-end to fully utilize the environment's texture and structural information. In the back-end, a combination of loosely-coupled and tightly-coupled schemes is designed for the multiple features to ensure both the robustness and the scalability of the system. Experimental results against existing state-of-the-art RGB-D SLAM systems verify the effectiveness and robustness of the proposed method. The proposed approach performs well both in scenes with limited texture or illumination variations and in common scenes.
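
For readers unfamiliar with the feature types named in the description, the following minimal Python sketch shows one common way to obtain them from a single RGB-D frame: ORB keypoints and Hough line segments from the RGB image, and a dominant 3D plane fitted to the back-projected depth with a plain RANSAC loop. This is an illustrative sketch only, not the thesis's implementation; the intrinsics FX/FY/CX/CY, the depth scale, and all thresholds are assumed placeholder values.

```python
# Illustrative sketch only: 2D point/line features from the RGB image and a
# dominant 3D plane from the depth map. NOT the thesis's implementation;
# intrinsics, depth scale, and thresholds are assumed placeholder values.
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
DEPTH_SCALE = 1000.0                          # assumed: depth stored in millimetres

def extract_2d_features(rgb):
    """ORB keypoints (texture) and Hough line segments (structure) from an RGB frame."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    return keypoints, descriptors, (lines if lines is not None else np.empty((0, 1, 4)))

def backproject(depth):
    """Back-project a depth image into a set of 3D points using the pinhole model."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u] / DEPTH_SCALE
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack((x, y, z))

def ransac_plane(points, iters=200, tol=0.02):
    """Fit one dominant plane (n, d) with a plain RANSAC loop; tol is in metres."""
    best_inliers, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-8:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers:
            best_inliers, best_model = inliers.sum(), (n, d)
    return best_model, best_inliers
```

In a full pipeline such as the one the description outlines, the 2D features would be matched frame-to-frame while plane (and line) landmarks would be passed to the loosely- or tightly-coupled back-end for pose estimation.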
author2 Xie Lihua
author_facet Xie Lihua
Cai, Pei
format Thesis-Master by Coursework
author Cai, Pei
author_sort Cai, Pei
title Combined 2D and 3D features for robust RGB-D visual odometry
title_short Combined 2D and 3D features for robust RGB-D visual odometry
title_full Combined 2D and 3D features for robust RGB-D visual odometry
title_fullStr Combined 2D and 3D features for robust RGB-D visual odometry
title_full_unstemmed Combined 2D and 3D features for robust RGB-D visual odometry
title_sort combined 2d and 3d features for robust rgb-d visual odometry
publisher Nanyang Technological University
publishDate 2023
url https://hdl.handle.net/10356/170197
_version_ 1779156769028702208