Context-aware techniques and decision-making in autonomous mobile robot path planning

Bibliographic Details
Main Author: Lim, Jia Sheng
Other Authors: Khong, Andy W H
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access: https://hdl.handle.net/10356/167736
Institution: Nanyang Technological University
Description
Summary: Navigation for an autonomous mobile robot involves several components: mapping, localization, path planning, and obstacle avoidance. First, the robot requires information about its environment (context awareness) and about where it is currently located (localization). This information is gathered by sensors such as RGB or depth cameras, Light Detection and Ranging (LiDAR), and/or Radio Detection and Ranging (RADAR). In an indoor environment this sensing must run continuously, which is where Simultaneous Localization and Mapping (SLAM) thrives: a robot running SLAM constructs an accurate and precise map of the indoor environment while simultaneously estimating its own position and orientation within that map.

Three main SLAM techniques are used in industry to date: Visual SLAM (VSLAM), LiDAR SLAM, and RADAR SLAM. VSLAM uses a camera system to capture a continuous stream of images of the environment and applies computer vision to build a semantic map and localize the robot within it. Its camera sensor is sensitive to low ambient light, which introduces noise in unlit scenes; on the other hand, VSLAM is cheap and its mapping is accurate for small indoor environments. LiDAR SLAM uses laser diodes to emit light toward surrounding obstacles and receives the reflected returns to reconstruct the geometry of the environment as a 3D map; localization is estimated by fusing this data with an odometry sensor or inertial measurement unit (IMU). LiDAR cannot detect objects through sufficiently opaque media such as heavy rain or dense smoke, but it offers a large detection range (up to 120 meters with high-end sensors). RADAR works like LiDAR but emits microwaves instead of laser beams; the measured distances between obstacles and the sensor are used to build a map and estimate the robot's position and orientation (pose) relative to it. RADAR does not map as accurately as the alternatives, but its strength is the ability to penetrate opaque and thick media.

Because each SLAM technique can alleviate the weaknesses of the others, we investigate the possibility of combining different SLAM techniques to achieve an ideal navigation system. Doing so requires studying presently available SLAM techniques and how SLAM affects path planning and dynamic obstacle avoidance. To study these navigation and SLAM techniques cost-effectively, we use a photorealistic, physics-accurate simulation platform, Nvidia Isaac Sim, to examine how conditions such as dynamic objects affect the navigation system of a mobile robot. With this platform, researchers can use the built-in Python scripting feature to implement new SLAM techniques or import existing ones, test them in any environment they wish, and subsequently transfer the developed techniques to a physical robot.
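
For illustration, the front end of many VSLAM systems begins with the feature-extraction and matching step described above. A minimal sketch using ORB features and brute-force matching from OpenCV follows; the frame filenames are placeholders, and this is not the specific pipeline used in the project.

```python
import cv2

# Detect ORB keypoints in two consecutive camera frames and match their
# binary descriptors; tracked correspondences like these feed the pose
# estimation stage of a VSLAM front end. Filenames are placeholders.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "replace placeholder paths"

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between frames")
```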
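The LiDAR mapping step described above reduces, at its simplest, to projecting each range-and-bearing return into the world frame using the pose estimate obtained from odometry/IMU fusion. The following 2D sketch uses only NumPy; all names are illustrative rather than taken from the project.

```python
import numpy as np

def scan_to_world(ranges, angles, pose):
    """Project a 2D LiDAR scan into the world frame.

    ranges: (N,) measured distances in meters
    angles: (N,) beam angles in the sensor frame, radians
    pose:   (x, y, theta) robot pose from odometry/IMU fusion
    """
    x, y, theta = pose
    # Points in the sensor frame: each beam direction scaled by its range.
    px = ranges * np.cos(angles)
    py = ranges * np.sin(angles)
    # Rigid-body transform (rotate by theta, translate by x, y).
    wx = x + px * np.cos(theta) - py * np.sin(theta)
    wy = y + px * np.sin(theta) + py * np.cos(theta)
    return np.stack([wx, wy], axis=1)

# Example: three beams at -45, 0, and +45 degrees, robot at the origin
# facing +x; the returned points would be accumulated into the map.
pts = scan_to_world(np.array([2.0, 1.5, 2.0]),
                    np.deg2rad([-45.0, 0.0, 45.0]),
                    (0.0, 0.0, 0.0))
print(pts)
```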
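Finally, a minimal headless Isaac Sim session of the kind the summary describes might look like the sketch below. The API names come from the omni.isaac.core extension shipped with Isaac Sim around 2022-2023 and may differ across versions; a SLAM study would add a robot, sensors, and the SLAM node inside the stepping loop.

```python
from omni.isaac.kit import SimulationApp

# Start Isaac Sim headless; this must happen before importing
# anything from omni.isaac.core.
simulation_app = SimulationApp({"headless": True})

from omni.isaac.core import World

world = World(stage_units_in_meters=1.0)
world.scene.add_default_ground_plane()
world.reset()

# Step the physics simulation; sensor data for SLAM would be
# consumed here on each tick.
for _ in range(100):
    world.step(render=False)

simulation_app.close()
```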