Deep features based real-time SLAM

Full description

Bibliographic Details
Main Author: Syed Ariff Syed Hesham
Other Authors: Wen Changyun
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/172525
Institution: Nanyang Technological University
Language: English
Description
Summary: This project implements a near real-time stereo SLAM system designed to operate effectively in extreme conditions using Deep Learning methods. It employs a Parallel Tracking-and-Mapping approach, making use of stereo constraints to ensure robust initialization and accurate scale recovery while maintaining real-time performance. To handle various real-world challenges, including dynamic illumination variations, the system integrates Convolutional Neural Networks (CNN) and Graph Neural Networks (GNN) for reliable corner point detection and matching. By optimizing the developed pipeline and its integration with the CNN and GNN components, the system achieves near real-time performance. Evaluations across diverse datasets with varying illumination conditions demonstrated that the developed system outperforms traditional feature-based methods in both accuracy and robustness. Notably, the system is implemented in Python with extensibility in mind, making it easy to read and understand while encouraging customization and further research-oriented development, which can potentially foster progress in SLAM systems for various applications. Furthermore, the project explores the system's adaptability to underwater contexts, showcasing its workability even in extreme real-world scenarios.
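
The summary describes a stereo front-end in which a CNN detects corner points, a GNN matches them across the stereo pair, and the known stereo baseline is used to recover metric scale. The Python sketch below illustrates this idea in outline only, under stated assumptions: detect_keypoints_cnn and match_keypoints_gnn are hypothetical placeholders for the learned components (the record does not name the networks or their interfaces), and the triangulation step assumes a rectified stereo pair with shared intrinsics, using OpenCV.

import numpy as np
import cv2


def stereo_projection_matrices(K, baseline):
    # Projection matrices for a rectified stereo pair with a known baseline.
    # K        : 3x3 intrinsic matrix shared by both cameras.
    # baseline : camera separation in metres; this is what fixes metric scale.
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])
    return P_left, P_right


def triangulate_matches(P_left, P_right, pts_left, pts_right):
    # Triangulate matched pixel coordinates (Nx2 arrays) into Nx3 metric landmarks.
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  pts_left.T.astype(np.float64),
                                  pts_right.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T


def stereo_frontend_step(img_left, img_right, K, baseline,
                         detect_keypoints_cnn, match_keypoints_gnn):
    # One front-end step: CNN detection, GNN matching, stereo triangulation.
    # detect_keypoints_cnn(img) -> (keypoints Nx2, descriptors NxD)   [hypothetical]
    # match_keypoints_gnn(desc0, desc1) -> Mx2 index pairs             [hypothetical]
    kpts_l, desc_l = detect_keypoints_cnn(img_left)
    kpts_r, desc_r = detect_keypoints_cnn(img_right)
    matches = match_keypoints_gnn(desc_l, desc_r)

    P_left, P_right = stereo_projection_matrices(K, baseline)
    landmarks = triangulate_matches(P_left, P_right,
                                    kpts_l[matches[:, 0]],
                                    kpts_r[matches[:, 1]])
    return landmarks  # metric-scale map points handed to the mapping thread

Because the baseline is known, the triangulated landmarks come out in metres, which is how a stereo pipeline of this kind can recover absolute scale at initialization rather than estimating it up to an unknown factor as monocular SLAM must.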