Machine learning for LiDAR-based place recognition

Simultaneous Localization and Mapping (SLAM) is one of the most essential techniques in many real-world robotic applications. Most SLAM algorithms assume a static environment, an assumption that does not hold in most applications. Recent work on semantic SLAM aims to understand the objects in an environment and to distinguish dynamic information from the scene context by performing image-based segmentation. However, the segmentation results are often imperfect or incomplete, which can reduce the quality of mapping and the accuracy of localization. In this Final Year Project, a robust multi-modal semantic framework is presented to solve the SLAM problem in complex and highly dynamic environments. A more powerful object feature representation is learned, and a re-looking and re-thinking mechanism is deployed in the backbone network, improving the segmentation results of the adopted baseline instance segmentation model. Moreover, geometric-only clustering and visual semantic information are combined to reduce the effect of segmentation errors caused by small-scale objects, occlusion, and motion blur. Thorough experiments have been carried out to evaluate the effectiveness of the proposed multi-modal semantic SLAM method. The experimental results indicate that the proposed SLAM system can precisely identify dynamic objects despite imperfect recognition and motion blur. Moreover, the proposed SLAM framework can efficiently build a static dense map at a processing rate of more than 10 Hz, making it suitable for many practical applications.
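The abstract's idea of combining geometric-only clustering with projected semantic labels can be sketched as below. This is a minimal illustrative assumption, not the project's actual implementation: the class set, the `min_ratio` threshold, and the function `label_dynamic_clusters` are all hypothetical names chosen for the example.

```python
# Hypothetical sketch: decide which geometric LiDAR clusters are dynamic by
# voting over per-point semantic labels projected from image segmentation.
from collections import defaultdict

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed dynamic categories

def label_dynamic_clusters(cluster_ids, semantic_labels, min_ratio=0.3):
    """Return the set of cluster ids considered dynamic.

    cluster_ids[i]     -- geometric cluster id of LiDAR point i
    semantic_labels[i] -- semantic class projected onto point i, or None
                          when the segmentation mask misses the point
    A cluster is dynamic when at least `min_ratio` of its labelled points
    carry a dynamic class; the whole geometric cluster is then treated as
    dynamic, compensating for partial or blurred segmentation masks.
    """
    total = defaultdict(int)
    dynamic = defaultdict(int)
    for cid, label in zip(cluster_ids, semantic_labels):
        if label is None:
            continue  # point not covered by the image segmentation
        total[cid] += 1
        if label in DYNAMIC_CLASSES:
            dynamic[cid] += 1
    return {cid for cid in total if dynamic[cid] / total[cid] >= min_ratio}
```

For example, a cluster whose labelled points are mostly "car" is flagged even when some of its points fall outside the mask, which is how geometry can repair an incomplete segmentation.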

Bibliographic Details
Main Author: Ko, Jing Ying
Other Authors: Xie, Lihua
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2021
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/149724
Institution: Nanyang Technological University
School: School of Electrical and Electronic Engineering
Laboratory: Delta-NTU Corporate Laboratory
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Citation: Ko, J. Y. (2021). Machine learning for LiDAR-based place recognition. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149724