Hybrid sensor fusion for unmanned ground vehicle
Main Author: Guan, Mingyang
Other Authors: Wen Changyun
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2020
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/144485
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-144485
record_format: dspace
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
description:
Unmanned ground vehicles (UGVs) have been applied to many important real-world tasks, such as surveillance, exploration of hazardous environments, and autonomous transportation.
A UGV is a complex system that integrates several challenging technologies, such as simultaneous localization and mapping (SLAM), collision-free navigation, and robotic perception.
Generally, the navigation and control of UGVs in Global Positioning System (GPS)-denied environments (e.g., indoor scenarios) depend critically on the SLAM system, which provides the localization service, while robotic perception endows UGVs with the ability to understand their surroundings, for example by continuously tracking moving obstacles and filtering them out of the localization process.
This thesis concentrates on two topics central to autonomous robotic systems: SLAM and visual object tracking.
The first part of this thesis focuses on visual object tracking,
i.e., estimating the motion state of a given target from its appearance information.
Although many promising tracking models have been proposed in the past decade, several challenges remain, such as computational efficiency and tracking-model drift caused by illumination variation, motion blur, occlusion, and deformation.
We therefore address these issues by proposing two trackers: 1) an event-triggered tracking (ETT) framework, in which an efficient short-term tracker (i.e., a correlation filter based tracker) carries out the tracking task most of the time and a restoration procedure is triggered once the short-term tracker fails, thereby balancing tracking accuracy and efficiency; and 2) a reliability re-determinative correlation filter (RRCF), which exploits multiple feature representations to robustify the tracking model, together with two different weight solvers that adaptively adjust the importance of each feature.
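To make the adaptive-weighting idea concrete, the following is a minimal, illustrative Python sketch (an assumption for illustration, not the actual RRCF solver of this thesis): each feature channel yields a correlation response map, a reliability score is computed from its peak-to-sidelobe ratio (PSR), and the normalized scores serve as fusion weights. The function names and the PSR-based weighting rule are hypothetical.
\begin{verbatim}
import numpy as np

def psr(response, exclude=5):
    # Peak-to-sidelobe ratio: a common reliability proxy for a
    # correlation-filter response map (higher = more reliable).
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(responses):
    # Fuse per-feature response maps (e.g. HOG, colour names, deep
    # features) with reliability-derived weights; the target position
    # is the peak of the weighted sum.
    scores = np.array([psr(r) for r in responses])
    weights = scores / (scores.sum() + 1e-8)
    fused = sum(w * r for w, r in zip(weights, responses))
    return np.unravel_index(fused.argmax(), fused.shape), weights
\end{verbatim}
In the actual RRCF the weights are obtained by dedicated solvers rather than a fixed PSR heuristic; the sketch only shows where such weights enter the fusion.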
Extensive experiments on several large datasets validate that: 1) the proposed tracking framework substantially enhances the robustness of the tracking model, and 2) the two proposed weight solvers can effectively find the optimal weight for each feature.
As expected, the two proposed trackers indeed improve accuracy and robustness compared with state-of-the-art trackers.
In particular, on VOT2016 the proposed RRCF achieves an outstanding EAO score of \textbf{0.453}, outperforming recent top trackers by a large margin.
The second part of this thesis considers the issue of SLAM.
Typically, a SLAM system relies on information collected from sensors such as LiDAR, cameras, and IMUs, and it either suffers from accumulated localization error due to the lack of a global reference or spends additional time on loop-closure detection, which reduces efficiency.
To handle the issue of error accumulation, we propose to integrate several low-cost radio-frequency sensors (i.e., ultra-wideband (UWB)) with LiDAR/camera to construct a fusion SLAM system for GPS-denied environments.
We propose to fuse the peer-to-peer ranges measured among UWB nodes with laser scanning information, i.e., ranges measured between the robot and nearby objects/obstacles, for simultaneous localization of the robot and all UWB beacons, together with LiDAR mapping.
The fusion is inspired by two facts:
1) LiDAR may improve UWB-only localization accuracy as it gives a more precise and comprehensive picture of the surrounding environment;
2) conversely, UWB ranging measurements may remove the error accumulated in the LiDAR-based SLAM algorithm.
More importantly, two different fusion schemes, named one-step optimization \footnote{video in workshop: \url{https://youtu.be/yZIK37ykTGI}} and step-by-step optimization \footnote{video in workshop: \url{https://youtu.be/depguH_h2AM}}$^{,}$\footnote{video in garden: \url{https://youtu.be/FQQBuIuid2s}}, are proposed in this thesis to tightly fuse UWB ranges with LiDAR scanning.
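As a rough illustration of what such a tight fusion optimizes (a generic formulation assumed here, not necessarily the exact cost used in the thesis), the robot poses and the beacon positions can be estimated jointly by minimizing UWB range residuals together with LiDAR scan-matching (relative-pose) residuals:
\[
\min_{\{T_k\},\{b_j\}} \;
\sum_{k,j} \rho\!\left(\frac{\big(d_{kj}-\|p_k-b_j\|\big)^{2}}{\sigma_u^{2}}\right)
+ \sum_{i<j} \frac{\big(r_{ij}-\|b_i-b_j\|\big)^{2}}{\sigma_b^{2}}
+ \sum_{k} \big\|\log\!\big(\hat{T}_{k,k+1}^{-1}\, T_k^{-1} T_{k+1}\big)\big\|^{2}_{\Sigma_l},
\]
where $T_k$ (with position $p_k$) is the robot pose at time $k$, $b_j$ the position of UWB beacon $j$, $d_{kj}$ a robot-to-beacon range, $r_{ij}$ a peer-to-peer range between beacons, $\hat{T}_{k,k+1}$ the relative pose from LiDAR scan matching, and $\rho$ a robust loss. Under this reading, one-step optimization would solve all terms jointly, while step-by-step optimization would, for example, first estimate the beacon positions from the peer-to-peer ranges and then optimize the robot trajectory; this interpretation is an assumption for illustration only.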
The experiments demonstrate that UWB/LiDAR fusion enables drift-free SLAM in real time based on ranging measurements only.
Furthermore, since the established UWB/LiDAR fusion SLAM system not only provides a drift-free localization service for UGVs but also sketches an abstract map (i.e., the to-be-explored region) of the environment, a fully autonomous exploration system $^{2,3}$ is built upon the UWB/LiDAR fusion SLAM.
A where-to-explore scheme is proposed to guide the robot toward less-explored areas; it is implemented together with a collision-free navigation system and a global path-planning module.
With these modules, the robot is able to autonomously explore an environment and build a detailed map of it.
In the navigation process, we use UWB beacons, whose locations are estimated on the fly, to sketch the region that the robot is going to explore.
In the mapping process, UWB sensors mounted on the robot provide real-time location estimates that help remove the accumulated error of LiDAR-only SLAM.
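The following minimal Python sketch illustrates one plausible reading of such a where-to-explore rule (the names and the scoring heuristic are assumptions for illustration, not the exact algorithm of this thesis): each UWB beacon, with its on-the-fly position estimate, defines a candidate region, and the robot heads to the beacon whose neighbourhood contains the largest fraction of unknown cells in the occupancy grid.
\begin{verbatim}
import numpy as np

UNKNOWN = -1  # occupancy-grid convention: -1 unknown, 0 free, 1 occupied

def unknown_ratio(grid, center, radius):
    # Fraction of unknown cells in a square window around `center`.
    r, c = center
    window = grid[max(0, r - radius):r + radius + 1,
                  max(0, c - radius):c + radius + 1]
    return float(np.mean(window == UNKNOWN))

def pick_exploration_goal(grid, beacon_cells, radius=10):
    # Choose the beacon whose neighbourhood is least explored.
    # `beacon_cells` maps beacon id -> (row, col) cell, derived from the
    # on-the-fly UWB beacon position estimates.
    scores = {bid: unknown_ratio(grid, cell, radius)
              for bid, cell in beacon_cells.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.0 else None  # None: fully explored

# Toy example: a 60x60 grid, unexplored only in the top-right corner.
grid = np.zeros((60, 60), dtype=int)
grid[:20, 40:] = UNKNOWN
beacons = {"A": (50, 10), "B": (10, 50), "C": (30, 30)}
print(pick_exploration_goal(grid, beacons))  # -> "B"
\end{verbatim}
The selected goal would then be handed to the global path planner and the collision-free navigation module mentioned above.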
Experiments are conducted in two different environments, a cluttered workshop and a spacious garden, to verify the effectiveness of our proposed strategy.
The experimental tests involving UWB/LiDAR fusion SLAM and autonomous exploration are filmed $^{2,3}$.
author2: Wen Changyun
format: Thesis-Doctor of Philosophy
author: Guan, Mingyang
title: Hybrid sensor fusion for unmanned ground vehicle
publisher: Nanyang Technological University
publishDate: 2020
url: https://hdl.handle.net/10356/144485
School: School of Electrical and Electronic Engineering
Supervisor: Wen Changyun (ECYWEN@ntu.edu.sg)
Degree: Doctor of Philosophy
Date deposited: 2020-11-06
Citation: Guan, M. (2020). Hybrid sensor fusion for unmanned ground vehicle. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/144485
DOI: 10.32657/10356/144485
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
File format: application/pdf