Server-edge visual localization system for autonomous agents
Saved in:

| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2022 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/162903 |
| Institution: | Nanyang Technological University |
Summary: SLAM algorithms are commonly used to generate a map that can subsequently be used for autonomous robot navigation and obstacle avoidance, with each robot simultaneously mapping its surroundings and localizing itself within that map. Since SLAM algorithms rely on approximate solutions and are commonly executed on embedded platforms under real-time constraints, it is crucial that they be both accurate and efficient.
One way of increasing the efficiency of SLAM is collaborative SLAM, which allows multiple agents to participate in the mapping and localization process concurrently. Ensuring that collaborative SLAM algorithms run properly under real-world conditions, such as agents connecting partway through a session or agents with differing camera characteristics and motion profiles, would allow these algorithms to serve a wider variety of applications. In this project, we tested COVINS, a collaborative SLAM framework, with up to two distinct types of agents running visual-inertial odometry through ORB-SLAM3.
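For context, the sketch below shows one way a collaborative back-end of this kind could be organized: a central server accepts keyframes from any number of agents, including agents that connect only after the session has started. All class and method names here are hypothetical and do not reflect COVINS's actual API; this is a conceptual sketch of the server-side pattern, not an implementation of the framework.

```python
# Conceptual sketch of a collaborative SLAM back-end: a central server
# collects keyframes from several agents at once, and agents may join
# "on the fly" after a session has started. Hypothetical names; not
# COVINS's actual interfaces.
import queue
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyframeMsg:
    agent_id: int         # which agent produced this keyframe
    kf_id: int            # keyframe index within that agent's trajectory
    pose: tuple           # (x, y, z, qx, qy, qz, qw) pose estimate
    point_ids: frozenset  # IDs of the map points observed in this keyframe

class CollabServer:
    def __init__(self):
        self.inbox = queue.Queue()  # thread-safe: agents submit concurrently
        self.keyframes = []         # global keyframe store across all agents
        self.agents = set()         # agents seen so far

    def connect(self, agent_id: int):
        # On-the-fly connection: agents may register at any time.
        self.agents.add(agent_id)

    def submit(self, msg: KeyframeMsg):
        self.inbox.put(msg)

    def process_pending(self):
        # Drain pending keyframes into the shared map representation.
        while not self.inbox.empty():
            self.keyframes.append(self.inbox.get())
```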
Specifically, a Tello EDU drone and an Intel RealSense Depth Camera D435i were used as the agents. Calibration was performed before running the framework in the Hardware & Embedded Systems Lab (HESL) at NTU. Both on-the-fly connection and concurrent connection to the framework were tested, and trajectory estimates for each agent as well as covisibility edges between the agents' keyframes were obtained. It was found that on-the-fly connections were well supported, while agents must first perform well with visual-inertial odometry on their own in order to integrate properly with the COVINS framework.
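As an illustration of the covisibility edges mentioned above, the sketch below links two keyframes whenever they observe a minimum number of common map points; keyframes from different agents can be linked the same way, which is what makes map merging across agents possible. This is a conceptual sketch, not COVINS's implementation, and the threshold of 15 shared points is an arbitrary assumption.

```python
# Illustrative covisibility graph: two keyframes are linked when they
# observe at least `min_shared` common map points. Conceptual sketch
# only; not COVINS's actual data structure.
from itertools import combinations

def covisibility_edges(keyframes, min_shared=15):
    """keyframes: list of (kf_id, set_of_map_point_ids) pairs.
    Returns edges as (kf_a, kf_b, n_shared) tuples."""
    edges = []
    for (id_a, pts_a), (id_b, pts_b) in combinations(keyframes, 2):
        shared = len(pts_a & pts_b)
        if shared >= min_shared:
            edges.append((id_a, id_b, shared))
    return edges

# Example: keyframes from two different agents observing overlapping
# map points yield one inter-agent edge sharing 15 points.
kfs = [("agent0/kf3", set(range(0, 40))),
       ("agent1/kf7", set(range(25, 60)))]
print(covisibility_edges(kfs))
```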