Edge-server visual localization for autonomous agents
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/153291
Institution: Nanyang Technological University
Summary: Simultaneous Localization and Mapping (SLAM) of an unfamiliar environment is an important component of mobile autonomous robots and navigation systems. The autonomous agent must be able to locate itself effectively in both indoor and outdoor settings. However, indoor SLAM cannot rely on the Global Positioning System (GPS), and existing methods frequently assume static scenes, an assumption that rarely holds in real-world conditions. Furthermore, the maps created by traditional SLAM systems carry no semantic information about the environment, which is useful for higher-level tasks such as path navigation or object detection and classification. Computational cost is another important consideration, as SLAM systems are often deployed on embedded platforms. The goal of this project is to develop an edge-server collaborative visual-inertial semantic localization system. A computer-vision-based positioning system (a visual-inertial SLAM system) is designed on the Jetson Xavier NX embedded platform; it works in dynamic environments and generates context-aware sparse 3D semantic segmentation. The SLAM system on the Jetson Xavier NX communicates the computed poses to a server, which produces a global map. The SLAM system also fuses inertial measurements from an Inertial Measurement Unit (IMU) to reduce uncertainty in pose estimation, making the pose estimates more robust. Finally, the SLAM system relies on a deep learning module to obtain the semantic information used to build global semantic maps.
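The record gives no implementation details, but the edge-to-server pose reporting it describes can be pictured with a minimal sketch. Everything below (the message schema, the `server.local:9000` address, the function names) is a hypothetical illustration, not the project's actual protocol:

```python
import json
import socket
import time

# Hypothetical message schema: a timestamped 6-DoF pose (position + unit
# quaternion) plus optional semantic labels for newly observed map points.
def make_pose_message(position, quaternion, labels=None):
    return {
        "stamp": time.time(),             # acquisition time, seconds
        "position": list(position),       # [x, y, z] in the world frame, metres
        "orientation": list(quaternion),  # [qw, qx, qy, qz], unit quaternion
        "labels": labels or [],           # e.g. [{"point_id": 17, "class": "chair"}]
    }

def send_pose(sock, msg):
    # Newline-delimited JSON keeps framing trivial over a TCP stream.
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

if __name__ == "__main__":
    # Placeholder address for the mapping server that builds the global map.
    with socket.create_connection(("server.local", 9000)) as sock:
        msg = make_pose_message([1.0, 0.2, 0.0], [1.0, 0.0, 0.0, 0.0])
        send_pose(sock, msg)
```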
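Likewise, the IMU-aided pose estimation mentioned in the summary typically involves a strap-down prediction step between camera frames. The following is a simplified sketch of that step under idealized assumptions (biases already subtracted, measurements constant over the interval); it is not the project's actual estimator:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2), assumed

def integrate_imu(R, p, v, gyro, accel, dt):
    """One strap-down IMU integration step.

    R: 3x3 world-from-body rotation, p: position (m), v: velocity (m/s),
    gyro: body angular rate (rad/s), accel: body specific force (m/s^2).
    """
    # Rotation update: exponentiate the small angle gyro*dt (Rodrigues formula).
    theta = gyro * dt
    angle = np.linalg.norm(theta)
    if angle > 1e-12:
        axis = theta / angle
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    # Accelerometers measure specific force; rotate to world and add gravity back.
    a_world = R @ accel + GRAVITY
    p_next = p + v * dt + 0.5 * a_world * dt**2
    v_next = v + a_world * dt
    return R @ dR, p_next, v_next
```

In a visual-inertial system, a prediction of this kind constrains the pose search between camera frames, which is one way IMU data reduces the uncertainty the summary refers to.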