Server-edge visual localization system for autonomous agents

SLAM algorithms are commonly used to generate a map that can subsequently be used in the field of autonomous robot navigation and obstacle avoidance, with robots simultaneously mapping the environment around them and localising themselves within it. Since SLAM algorithms rely on approximate solutions and are commonly executed on embedded platforms in real-time environments, it is crucial for them to be accurate yet efficient. One way of increasing the efficiency of SLAM is through collaborative SLAM, which allows multiple agents to participate in the mapping and localization process concurrently. Ensuring that collaborative SLAM algorithms run properly under real-world conditions, such as when agents connect partway through a session or when agents with different camera characteristics and motion profiles are used, would allow these algorithms to serve a wider variety of applications. In this project, we tested COVINS, a collaborative SLAM framework, with up to two distinct types of agents running visual-inertial odometry through ORB-SLAM3. Specifically, a Tello EDU drone and an Intel RealSense Depth Camera D435i were used as the agents. Calibration was performed before running the framework in the Hardware & Embedded Systems Lab (HESL) at NTU. Both on-the-fly and concurrent connection to the framework were tested, yielding trajectory estimates for each agent and covisibility edges between the agents' keyframes. It was found that on-the-fly connections were well supported, while agents must first perform well with visual-inertial odometry on their own in order to integrate properly with the COVINS framework.

Bibliographic Details
Main Author: Chong, Shen Rui
Other Authors: Lam Siew Kei
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2022
Subjects: Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/162903
Institution: Nanyang Technological University
id sg-ntu-dr.10356-162903
record_format dspace
school School of Computer Science and Engineering
supervisor_contact ASSKLam@ntu.edu.sg
topic Engineering::Computer science and engineering
degree Bachelor of Engineering (Computer Science)
date_accessioned 2022-11-14T01:34:45Z
date_issued 2022
citation Chong, S. R. (2022). Server-edge visual localization system for autonomous agents. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/162903
project_code SCSE21-0703
file_format application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
author Chong, Shen Rui
author2 Lam Siew Kei
title Server-edge visual localization system for autonomous agents
format Final Year Project
publisher Nanyang Technological University
publishDate 2022
url https://hdl.handle.net/10356/162903
_version_ 1751548510027644928