Obstacle detection and SLAM techniques for autonomous vehicles

Bibliographic Details
Main Author: Chen, Jiaying
Other Authors: Anamitra Makur
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/174613
Institution: Nanyang Technological University
Description
Summary: Autonomous robots have been studied extensively over the years, driven by diverse applications including exploration, navigation, and surveillance. To accomplish such missions, a robot must detect and map obstacles in unknown environments. Simultaneous Localization And Mapping (SLAM) is a fundamental subject in robotics research: the process by which a robot builds a map of its environment while simultaneously determining its own location from onboard sensors, forming the basis for subsequent tasks such as path planning and collision avoidance. In traditional SLAM, robots passively collect data and build maps under external controllers or human supervision, whereas Active SLAM (A-SLAM) enables mobile robots to explore and map environments autonomously.

This thesis presents three main contributions. First, we propose a novel approach for estimating the visual odometry of unmanned surface vehicles (USVs) in maritime environments, fusing data from camera and radar sensors for reliable obstacle detection. Traditional methods are inadequate here because maritime scenes lack distinguishable visual features; the proposed method combines the camera's rich visual data with the radar's all-weather performance to detect and classify obstacles.

Second, we propose a LiDAR-inertial SLAM framework for ground robots that improves localization accuracy by addressing drift and motion constraints. LiDAR sensors offer direct, dense, and accurate depth measurements, making them well suited to SLAM; however, existing LiDAR-based SLAM methods often drift along the z-axis and deviate from SE(2) motion constraints due to rough terrain or motion vibration. The proposed framework overcomes these challenges, providing real-time performance and reliable localization for ground robots.

Third, we introduce an end-to-end deep reinforcement learning (DRL) based exploration framework for efficient navigation and exploration in large-scale environments. The framework integrates point-cloud and map information to transform raw LiDAR sensor data into robot control commands, improving the transfer from virtual training to real-world scenarios and enabling efficient exploration of complex, large-scale environments.
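
The camera-radar fusion idea in the first contribution can be made concrete with a minimal sketch: a planar radar detection given as (range, azimuth) is lifted to a 3D point, transformed into the camera frame, and projected through a pinhole model so it can be associated with a camera-based detection. All calibration values and detections below are hypothetical placeholders, not the thesis's calibration or method.

```python
# Minimal sketch: projecting planar radar detections into a camera image
# so they can be associated with camera-based obstacle detections.
# All calibration values below are hypothetical placeholders.
import numpy as np

# Hypothetical pinhole intrinsics for a 1280x720 camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical radar-to-camera extrinsics: radar x-forward/y-left/z-up
# frame rotated into the camera's x-right/y-down/z-forward frame.
R_cam_radar = np.array([[0.0, -1.0,  0.0],
                        [0.0,  0.0, -1.0],
                        [1.0,  0.0,  0.0]])
t_cam_radar = np.array([0.0, 0.2, 0.1])  # metres, placeholder offset

def radar_to_pixel(range_m, azimuth_rad):
    """Convert one (range, azimuth) radar detection to pixel coordinates."""
    # Radar detection as a 3D point in the radar frame (flat-sea assumption).
    p_radar = np.array([range_m * np.cos(azimuth_rad),
                        range_m * np.sin(azimuth_rad),
                        0.0])
    p_cam = R_cam_radar @ p_radar + t_cam_radar
    if p_cam[2] <= 0:          # behind the camera: not visible
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]    # pixel coordinates (u, v)

# Associate each radar detection with the nearest camera bounding-box centre.
detections = [(35.0, 0.05), (60.0, -0.30)]                    # (m, rad)
boxes = [np.array([655.0, 370.0]), np.array([150.0, 400.0])]  # box centres

for rng, az in detections:
    px = radar_to_pixel(rng, az)
    if px is None:
        continue
    nearest = min(boxes, key=lambda b: np.linalg.norm(b - px))
    print(f"radar ({rng} m, {az} rad) -> pixel {px.round(1)}, box {nearest}")
```

In practice the association step would use gating and track management rather than a nearest-centre match, but the sketch shows why the two sensors complement each other: radar supplies metric range in all weather, while the camera localizes and classifies the obstacle in the image.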
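The z-drift and SE(2) issue in the second contribution can also be illustrated. The sketch below projects a 4x4 SE(3) pose estimate onto the closest planar SE(2) pose, keeping only (x, y, yaw) and discarding the z, roll, and pitch components that accumulate on rough terrain. This is a generic illustration of the constraint, assuming a simple hard projection, not the thesis's actual formulation.

```python
# Minimal sketch: projecting an SE(3) pose estimate onto SE(2) so a ground
# robot's pose keeps only (x, y, yaw), suppressing z-drift and roll/pitch
# introduced by rough terrain or motion vibration. Generic illustration only.
import numpy as np

def project_to_se2(T):
    """Project a 4x4 SE(3) pose onto the closest planar SE(2) pose."""
    R, t = T[:3, :3], T[:3, 3]
    yaw = np.arctan2(R[1, 0], R[0, 0])   # heading extracted from rotation
    c, s = np.cos(yaw), np.sin(yaw)
    T2 = np.eye(4)
    T2[:3, :3] = np.array([[c, -s, 0.0],
                           [s,  c, 0.0],
                           [0.0, 0.0, 1.0]])  # yaw-only rotation
    T2[:2, 3] = t[:2]                     # keep x, y; drop drifting z
    return T2

# Example: a pose with small roll/pitch and z-drift from odometry noise.
roll, pitch, yaw, z_drift = 0.03, -0.02, 0.8, 0.15
Rx = np.array([[1, 0, 0],
               [0, np.cos(roll), -np.sin(roll)],
               [0, np.sin(roll),  np.cos(roll)]])
Ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
               [0, 1, 0],
               [-np.sin(pitch), 0, np.cos(pitch)]])
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
               [np.sin(yaw),  np.cos(yaw), 0],
               [0, 0, 1]])
T = np.eye(4)
T[:3, :3] = Rz @ Ry @ Rx
T[:3, 3] = [2.0, 1.0, z_drift]

print(project_to_se2(T).round(3))   # z and roll/pitch removed, yaw kept
```

A full LiDAR-inertial pipeline would typically enforce such a constraint softly inside the pose-graph or filter rather than by hard projection, but the effect is the same: the estimate is pulled back toward the ground plane the robot actually moves on.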
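Finally, the end-to-end mapping in the third contribution, from raw sensing to control commands, can be sketched as a small actor network that fuses a downsampled LiDAR scan with a flattened egocentric map crop and outputs normalized velocity commands. The architecture, input dimensions, and class name here are illustrative assumptions, not the thesis's network.

```python
# Schematic of the end-to-end idea: a small actor network that maps a
# downsampled LiDAR scan plus a flattened local occupancy-map crop directly
# to velocity commands. Dimensions and architecture are illustrative only.
import torch
import torch.nn as nn

class ExplorationPolicy(nn.Module):
    def __init__(self, n_beams=180, map_size=32):
        super().__init__()
        # Separate encoders for the two input modalities.
        self.scan_encoder = nn.Sequential(
            nn.Linear(n_beams, 128), nn.ReLU())
        self.map_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(map_size * map_size, 128), nn.ReLU())
        # Fused features -> (linear velocity, angular velocity).
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh())   # commands normalized to [-1, 1]

    def forward(self, scan, local_map):
        z = torch.cat([self.scan_encoder(scan),
                       self.map_encoder(local_map)], dim=-1)
        return self.head(z)

policy = ExplorationPolicy()
scan = torch.rand(1, 180)          # fake normalized LiDAR ranges
local_map = torch.rand(1, 32, 32)  # fake egocentric occupancy crop
v, w = policy(scan, local_map)[0]
print(f"linear={v.item():+.3f}  angular={w.item():+.3f}")
```

In a DRL setting this actor would be trained in simulation with an exploration reward (for example, newly observed map area minus a collision penalty); feeding both the raw scan and the accumulated map is what lets the policy reason about large-scale structure rather than only the immediate surroundings.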