Simulator for autonomous robot navigation
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175137
Institution: Nanyang Technological University
Summary: Simultaneous Localization and Mapping (SLAM), a fundamental aspect of robotics and autonomous navigation systems, comprises two essential components: localization and mapping. Localization involves determining the position of a robot or device as it navigates an unfamiliar environment, while mapping pertains to creating and maintaining a representation of that environment.
Currently, the prevailing method for localization relies heavily on Global Positioning System (GPS) sensors. However, GPS is effective only where there is a clear view of the sky, and it introduces significant errors in indoor navigation, underground exploration, and densely built urban areas with tall buildings [1].
This limitation has spurred the exploration of alternatives such as Visual-SLAM, which harnesses visual information captured by cameras for localization and mapping. Unlike GPS, visual-based approaches do not rely on external signals and can therefore operate effectively in GPS-denied environments, making them particularly suited for indoor navigation, underground exploration, and autonomous vehicles navigating urban canyons [1]. The versatility of Visual-SLAM extends beyond robotics; it finds applications in augmented reality, virtual reality, and indoor positioning systems.
The proposed project aims to develop a modular Graphical User Interface (GUI) tailored specifically for Visual-SLAM applications. The GUI will facilitate the visualization and analysis of various real-time Visual-SLAM algorithms, giving users insight into their performance under different conditions. Its modularity will enable easy integration with different Visual-SLAM algorithms and frameworks, fostering collaboration and innovation. Leveraging the capabilities of the Gazebo GUI and the Robot Operating System (ROS), the project aims to provide a user-friendly interface that simplifies the deployment and evaluation of Visual-SLAM solutions across diverse robotic platforms and simulation environments. Through this initiative, the project seeks to accelerate research and development in Visual-SLAM, paving the way for enhanced navigation capabilities in robotics, augmented reality applications, and beyond.
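As an illustration of the kind of ROS integration the summary describes, the following is a minimal sketch of a ROS 1 (rospy) node that a Visual-SLAM GUI could use to receive pose estimates from a running SLAM algorithm. The topic name /orb_slam/pose and the node name are assumptions made for illustration; they are not part of the project itself.

#!/usr/bin/env python3
# Minimal sketch: subscribe to the camera pose estimated by a
# Visual-SLAM node and forward it for visualization. The topic
# name "/orb_slam/pose" is a hypothetical placeholder.
import rospy
from geometry_msgs.msg import PoseStamped

def pose_callback(msg):
    # A full GUI would update a 3D trajectory view here; this
    # sketch simply logs the estimated position.
    p = msg.pose.position
    rospy.loginfo("SLAM pose: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

if __name__ == "__main__":
    rospy.init_node("slam_gui_listener")
    rospy.Subscriber("/orb_slam/pose", PoseStamped, pose_callback)
    rospy.spin()  # process incoming pose estimates until shutdown

Because the node only depends on a standard geometry_msgs/PoseStamped topic, any SLAM backend that publishes its pose in that form could be swapped in, which mirrors the modularity goal stated in the summary.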