Simulator for autonomous robot navigation

Simultaneous Localization and Mapping (SLAM), a fundamental aspect of robotics and autonomous navigation systems, comprises two essential components: localization and mapping. Localization is the ability to determine the position of a robot or device within an unfamiliar environment, while mapping is the creation and maintenance of a representation of that environment. Currently, the prevailing method for localization relies heavily on Global Positioning System (GPS) sensors. However, GPS is effective mainly in scenarios with a clear view of the sky, and it introduces significant errors when used for indoor navigation, underground exploration, or in densely built urban areas with tall buildings [1]. This limitation has spurred the exploration of alternative solutions such as Visual-SLAM.

Visual-SLAM presents a promising alternative by harnessing visual information captured through cameras for localization and mapping. Unlike GPS, visual-based approaches do not rely on external signals, so they can operate effectively in GPS-denied environments, making them particularly suited to indoor navigation, underground exploration, and autonomous vehicles navigating urban canyons [1]. The versatility of Visual-SLAM extends beyond robotics; it finds applications in augmented reality, virtual reality, and indoor positioning systems.

The proposed project aims to develop a modular Graphical User Interface (GUI) tailored specifically to Visual-SLAM applications. The GUI will facilitate the visualization and analysis of various real-time Visual-SLAM algorithms, giving users insight into their performance under different conditions. Its modularity will enable easy integration with different Visual-SLAM algorithms and frameworks, fostering collaboration and innovation. Leveraging the capabilities of the Gazebo GUI and the Robot Operating System (ROS), the project aims to provide a user-friendly interface that simplifies the deployment and evaluation of Visual-SLAM solutions across diverse robotic platforms and simulation environments. Through this initiative, the project aims to accelerate research and development in Visual-SLAM, paving the way for enhanced navigation capabilities in robotics, augmented reality applications, and beyond.
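The two components can be illustrated with a minimal, purely hypothetical Python sketch (not part of the project's codebase): a dead-reckoning pose update stands in for localization, and converting a range/bearing observation into a world-frame landmark stands in for mapping. Real Visual-SLAM additionally handles sensor noise, feature matching, and loop closure, none of which is modeled here.

```python
import math

def update_pose(pose, v, omega, dt):
    """Localization step (dead reckoning): integrate linear velocity v
    and angular velocity omega over dt to advance the (x, y, theta) pose."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

def map_landmark(pose, r, bearing):
    """Mapping step: project a range/bearing observation made from the
    current pose into a world-frame landmark coordinate."""
    x, y, theta = pose
    return (x + r * math.cos(theta + bearing),
            y + r * math.sin(theta + bearing))

# Drive straight for 1 s at 1 m/s, then map a landmark seen 2 m ahead.
pose = (0.0, 0.0, 0.0)
pose = update_pose(pose, v=1.0, omega=0.0, dt=1.0)
landmark = map_landmark(pose, r=2.0, bearing=0.0)
```

In a full system the dead-reckoned estimate drifts, and the camera observations are what correct it; that coupling between the two steps is precisely what SLAM algorithms solve.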


Bibliographic Details
Main Author: Ng, Zheng Jie
Other Authors: Lam Siew Kei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Engineering; Simulator
Online Access:https://hdl.handle.net/10356/175137
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-175137
School: School of Computer Science and Engineering
Other Author: Lam Siew Kei (ASSKLam@ntu.edu.sg)
Degree: Bachelor's degree
Project code: SCSE23-0144
Citation: Ng, Z. J. (2024). Simulator for autonomous robot navigation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175137