Robust and light-weight simultaneous localization and mapping for autonomous vehicles
Simultaneous Localization And Mapping (SLAM) is one of the most fundamental and essential topics in robotics research. SLAM is a task for a robot to perceive the environment and localize itself based on inputs from its on-board sensors. The robot is also supposed to construct a map of the surrounding environment...

Main Author: | Wang, Han
---|---
Other Authors: | Xie Lihua
Format: | Thesis-Doctor of Philosophy
Language: | English
Published: | Nanyang Technological University, 2021
Subjects: | Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access: | https://hdl.handle.net/10356/151717
Institution: | Nanyang Technological University
id | sg-ntu-dr.10356-151717
---|---
record_format | dspace
institution | Nanyang Technological University
building | NTU Library
continent | Asia
country | Singapore
content_provider | NTU Library
collection | DR-NTU
language | English
topic | Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
description
Simultaneous Localization And Mapping (SLAM) is one of the most fundamental and essential topics in robotics research. SLAM is the task of a robot perceiving its environment and localizing itself based on inputs from its on-board sensors. The robot is also expected to construct a map of the surrounding environment for subsequent task planning and collision avoidance. As the robotics industry has developed over the past decades, more and more applications depend on the performance of the SLAM system. A good SLAM system needs to meet the following requirements. First, many advanced robotic applications require localization accuracy at the sub-meter or even centimeter level, e.g., precision landing for Unmanned Aerial Vehicles (UAVs) and automatic charging for Automated Guided Vehicles (AGVs). Second, the deployment of SLAM extends from indoor AGVs to outdoor autonomous driving cars and UAVs. As robots move faster, localization must also run in real time; any delay in the localization result may lead to serious safety issues such as collisions. Lastly, robotic applications are expanding from static to dynamic environments, from simple to complex environments, and from short-term to long-term operation. The SLAM framework is expected to provide reliable localization under different scenarios and to be robust to environmental changes. However, mobile robots often have limited computational resources for achieving good SLAM performance. Motivated by this challenge, this thesis presents a unified SLAM framework with high flexibility, practicality and stability for autonomous vehicles. Specifically, we explore improvement opportunities in the front-end and back-end of SLAM separately, and then examine their integration on warehouse robots and autonomous driving cars.
In the first part of this thesis, we establish a LiDAR-based odometry to provide real-time localization for warehouse robots. Light Detection And Ranging (LiDAR) is an important sensor for autonomous vehicles due to its high accuracy. Noting that lines and planes are often distinct, we adopt an efficient feature extraction based on local smoothness analysis to search for edge and planar features, respectively. The extracted features are associated with global lines and planes, and the robot pose is estimated by minimizing point-to-edge and point-to-plane distances. Moreover, we adopt non-iterative sensor motion estimation and distortion correction to reduce the computational cost. As a result, the framework achieves competitive localization accuracy with a processing rate of more than 10 Hz in public dataset evaluations, providing a good trade-off between performance and computational cost for practical applications.
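The abstract does not give the registration details; as a minimal illustrative sketch (not the thesis implementation), the Python snippet below shows how point-to-plane and point-to-edge residuals for a single feature point could be computed with NumPy, using an SVD-based plane fit over local map neighbors. All function names and the plane-fitting choice are assumptions.

```python
import numpy as np

def fit_plane(neighbors):
    """Fit a plane n.x + d = 0 to nearby map points via SVD; the normal is
    the direction of least variance around the centroid."""
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def point_to_plane_residual(p, R, t, normal, d):
    """Signed distance from the transformed feature point R @ p + t to the plane."""
    return normal.dot(R @ p + t) + d

def point_to_edge_residual(p, R, t, a, b):
    """Distance from the transformed point to the line through map points a and b."""
    q = R @ p + t
    return np.linalg.norm(np.cross(q - a, q - b)) / np.linalg.norm(b - a)
```

In a full odometry pipeline, such residuals would be stacked over all associated edge and planar features and minimized over the pose (R, t) with a nonlinear least-squares solver such as Gauss-Newton or Levenberg-Marquardt.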
LiDAR odometry solves the localization problem over short travel distances, but measurement noise causes localization drift in the long run. This drift needs to be resolved to enable a mobile robot to run for long hours in scenarios such as continuous warehouse operation. Hence, in the second part of this thesis, we investigate mitigating the drift with loop closure detection at the back-end. Loop closure detection is the task of identifying previously visited places from a database and re-localizing the robot to eliminate the accumulated drift. We first explore a vision-based system for loop closure detection. Saliency analysis is introduced to identify distinctive landmarks from the image stream. New landmarks are compared with existing landmarks to retrieve repeated scenes. Since the database grows incrementally as more places are explored, a Kernel-Cross-Correlator-based landmark retrieval is introduced to reduce the computational complexity. The experimental results show that it is faster than existing feature-based approaches while achieving similar recall rates.
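The Kernel Cross-Correlator formulation itself is not spelled out in the abstract; the sketch below only illustrates the general idea of correlation-based landmark retrieval, scoring a query descriptor against stored landmark descriptors with FFT-based circular cross-correlation (a simplified, linear-kernel stand-in). Descriptor construction, the score threshold, and all names are hypothetical.

```python
import numpy as np

def correlation_response(query, landmark):
    """Circular cross-correlation of two 1-D descriptors computed in the
    frequency domain; a sharp, high peak suggests a match."""
    return np.real(np.fft.ifft(np.fft.fft(query) * np.conj(np.fft.fft(landmark))))

def best_match(query, database, threshold=0.8):
    """Return the index of the stored landmark with the strongest normalized
    correlation peak, or None if no landmark exceeds the threshold."""
    best_idx, best_score = None, threshold
    q = query / (np.linalg.norm(query) + 1e-12)
    for i, landmark in enumerate(database):
        lm = landmark / (np.linalg.norm(landmark) + 1e-12)
        score = correlation_response(q, lm).max()
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Computing the correlation in the frequency domain keeps the per-landmark cost of this sketch at O(n log n), which is why correlator-based retrieval scales better than exhaustive feature matching as the landmark database grows.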
Vision-based loop closure detection only exploits the visual characteristics of a place, which can be easily affected by environmental changes such as illumination differences. Compared to visual information, geometric features are less sensitive to such variations. We therefore next explore LiDAR-based loop closure detection. Specifically, a novel global descriptor named Intensity Scan Context (ISC) is introduced that encodes both geometry and intensity properties. To improve the efficiency of place retrieval, a two-stage hierarchical re-identification process is proposed, consisting of fast binary-operation-based geometric relation retrieval followed by intensity structure re-identification. The experimental results show that LiDAR-based loop closure detection achieves similar performance to the vision-based approach in terms of recall rate, but is more robust in difficult scenarios such as reverse visits.
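The exact ISC construction is not given in the abstract; the following sketch only illustrates, under assumed grid sizes and thresholds, how a ring-sector descriptor holding per-cell intensity might be built and how a two-stage check (fast binary geometric comparison first, intensity comparison second) could be organized. Rotation handling, which the actual descriptor would need for reverse visits, is omitted here.

```python
import numpy as np

def intensity_scan_context(points, intensities, n_rings=20, n_sectors=60, max_range=50.0):
    """Project a LiDAR scan into a ring-sector polar grid and keep the
    maximum intensity per cell (an ISC-style descriptor sketch)."""
    desc = np.zeros((n_rings, n_sectors))
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi        # angle in [0, 2*pi]
    ring = np.minimum((r / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * n_sectors).astype(int), n_sectors - 1)
    keep = r < max_range
    for ri, si, inten in zip(ring[keep], sector[keep], intensities[keep]):
        desc[ri, si] = max(desc[ri, si], inten)
    return desc

def two_stage_match(query, candidate, geom_thresh=0.6, inten_thresh=0.7):
    """Stage 1: cheap binary occupancy overlap; stage 2: intensity similarity,
    evaluated only if stage 1 passes. Intensities are assumed normalized to [0, 1]."""
    q_occ, c_occ = query > 0, candidate > 0
    geom_score = np.logical_and(q_occ, c_occ).sum() / max(np.logical_or(q_occ, c_occ).sum(), 1)
    if geom_score < geom_thresh:
        return False
    return 1.0 - np.mean(np.abs(query - candidate)) > inten_thresh
```

Rejecting most candidates with the cheap binary stage before the more expensive intensity comparison is what makes hierarchical retrieval of this kind fast enough for online use.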
After exploring the front-end and back-end of SLAM independently, we integrate the two components into a full SLAM solution. An intensity-assisted front-end odometry estimation and an ISC-based back-end optimization run in parallel to provide real-time localization and global optimization. Thorough experiments are performed on both outdoor autonomous driving and indoor warehouse robot operation. Compared to state-of-the-art works, this work achieves the fastest processing speed while maintaining satisfactory accuracy in various environments. It is a cost-effective solution for mobile robots with limited computational power.
author2 | Xie Lihua
---|---
format | Thesis-Doctor of Philosophy
author | Wang, Han
title | Robust and light-weight simultaneous localization and mapping for autonomous vehicles
publisher | Nanyang Technological University
publishDate | 2021
url | https://hdl.handle.net/10356/151717
_version_ | 1772826375825129472
spelling | sg-ntu-dr.10356-151717 (last modified 2023-07-04T17:07:39Z)
---|---
affiliation | School of Electrical and Electronic Engineering; Delta-NTU Corporate Laboratory
contact | ELHXIE@ntu.edu.sg
degree | Doctor of Philosophy
deposited | 2021-06-28T07:17:46Z
citation | Wang, H. (2021). Robust and light-weight simultaneous localization and mapping for autonomous vehicles. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/151717
doi | 10.32657/10356/151717
license | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
format | application/pdf
publisher | Nanyang Technological University