Obstacle detection and SLAM techniques for autonomous vehicles
Autonomous robots have been studied extensively in recent years, driven by applications such as exploration, navigation, and surveillance. To accomplish these missions, a robot must detect and map obstacles in unknown environments. Simultaneous Localization and Mapping (SLAM) is a fundamental problem in robotics: a robot builds a map of its environment while simultaneously estimating its own pose from onboard sensors, which forms the basis for downstream tasks such as path planning and collision avoidance. In traditional SLAM, the robot passively collects data while being driven by an external controller or a human operator; Active SLAM (A-SLAM), in contrast, lets a mobile robot decide where to go so that it explores and maps the environment autonomously.

This thesis presents three main contributions. First, we propose a novel approach to visual odometry for unmanned surface vehicles (USVs) in maritime environments, fusing camera and radar data for reliable obstacle detection. Conventional visual methods are inadequate at sea because distinguishable features are scarce; the proposed method combines the rich appearance information of the camera with the all-weather ranging capability of the radar to detect and classify obstacles. Second, we propose a LiDAR-inertial SLAM framework for ground robots that improves localization accuracy by addressing drift and motion constraints. LiDAR sensors provide direct, dense, and accurate depth measurements, making them well suited to SLAM, yet existing LiDAR-based methods often drift along the z-axis and deviate from the SE(2) constraint of planar motion because of rough terrain and vibration; the proposed framework mitigates these effects while running in real time, yielding reliable localization for ground robots. Third, we introduce an end-to-end deep-reinforcement-learning (DRL) exploration framework for efficient navigation and exploration in large-scale environments. The framework integrates point-cloud and map information to map raw LiDAR data directly to robot control commands, improving transfer from simulated training to real-world deployment and enabling efficient exploration in a variety of complex scenarios.
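As a concrete illustration of the SE(2) constraint mentioned for the ground-robot LiDAR-inertial framework, the sketch below shows the simplest way to suppress z-axis drift for a planar robot: project each 6-DoF pose estimate onto SE(2), keeping x, y, and yaw while discarding roll, pitch, and z. This is an illustrative sketch only, not code from the thesis; the function name `project_to_se2`, the `z_ref` parameter, and the hard-projection strategy are assumptions made here for clarity.

```python
import numpy as np


def project_to_se2(T_odom, z_ref=0.0):
    """Project a 6-DoF pose (4x4 homogeneous matrix) onto SE(2).

    Keeps the x/y translation and the yaw angle, discards roll and pitch,
    and pins z to a reference ground height, which suppresses the slow
    z-axis drift that planar ground robots accumulate over long runs.
    Illustrative sketch only; not the thesis implementation.
    """
    R = T_odom[:3, :3]
    yaw = np.arctan2(R[1, 0], R[0, 0])  # yaw from the rotation matrix (ZYX convention)
    c, s = np.cos(yaw), np.sin(yaw)

    T_se2 = np.eye(4)
    T_se2[:3, :3] = np.array([[c, -s, 0.0],
                              [s,  c, 0.0],
                              [0.0, 0.0, 1.0]])
    T_se2[0, 3], T_se2[1, 3], T_se2[2, 3] = T_odom[0, 3], T_odom[1, 3], z_ref
    return T_se2


if __name__ == "__main__":
    # A pose that has drifted 0.3 m in z and picked up a small pitch tilt.
    pitch = np.deg2rad(2.0)
    T = np.eye(4)
    T[:3, :3] = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
                          [0.0, 1.0, 0.0],
                          [-np.sin(pitch), 0.0, np.cos(pitch)]])
    T[:3, 3] = [5.0, 2.0, 0.3]
    print(project_to_se2(T))  # z reset to 0, roll/pitch removed, x/y/yaw kept
```

A hard projection like this assumes genuinely planar ground; coping with slopes and vibration is exactly the situation the proposed framework targets, typically by weighting the SE(2) constraint inside the estimator rather than enforcing it exactly.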
Main Author: | Chen, Jiaying
---|---
Other Authors: | Anamitra Makur (School of Electrical and Electronic Engineering)
Format: | Thesis-Doctor of Philosophy
Language: | English
Published: | Nanyang Technological University, 2024
Subjects: | Engineering
Online Access: | https://hdl.handle.net/10356/174613
DOI: | 10.32657/10356/174613
Citation: | Chen, J. (2024). Obstacle detection and SLAM techniques for autonomous vehicles. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174613
License: | Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Institution: | Nanyang Technological University