Robust semantic SLAM for autonomous robot
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175008
Institution: Nanyang Technological University
Summary: This study investigates the effectiveness of SuperPoint in Visual Simultaneous Localization and Mapping (Visual SLAM) for an autonomous robot. The wheeled autonomous mobile robot is intended for environments such as homes, warehouses, and factory floors. By utilizing SuperPoint, a learned feature detector and descriptor, the accuracy and reliability of the generated SLAM map are hypothesized to improve significantly. SuperPoint enables better feature detection, extraction, and matching through machine learning, which improves the density and consistency of the point cloud and yields a more detailed map of an unknown environment. The effectiveness of SuperPoint is measured by comparing, on a common dataset, the performance of ORB-SLAM3, the baseline SLAM model, against Ms-Deep SLAM, a modified version of ORB-SLAM3 that uses SuperPoint as its feature detector and descriptor. The test dataset is downloaded from "The KITTI Vision Benchmark Suite", provided by the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago. The two models are also tested under low-light conditions, where SuperPoint's performance is expected to stand out.
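The comparison the summary describes hinges on measuring each system's trajectory accuracy against KITTI ground-truth poses. A common metric for this is the absolute trajectory error (ATE RMSE) after rigid alignment of the estimated trajectory to ground truth. The sketch below is illustrative, not taken from the report: the function name and the assumption that trajectories are given as N×3 arrays of camera positions are my own.

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """Absolute trajectory error (RMSE) between an estimated and a
    ground-truth trajectory, after a rigid Kabsch-style alignment.
    Both inputs are (N, 3) arrays of corresponding positions."""
    gt_xyz = np.asarray(gt_xyz, dtype=float)
    est_xyz = np.asarray(est_xyz, dtype=float)
    # Centre both trajectories on their means
    gt_c = gt_xyz - gt_xyz.mean(axis=0)
    est_c = est_xyz - est_xyz.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance (Kabsch)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # Apply the alignment, then compare point-wise to ground truth
    aligned = (R @ est_c.T).T + gt_xyz.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))
```

Because the alignment removes any global rotation and translation, the metric isolates drift and local error, which is what distinguishes the ORB and SuperPoint front ends, rather than penalizing an arbitrary choice of starting frame.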