Edge-server visual localization for autonomous agents
Simultaneous Localization and Mapping (SLAM) of an unfamiliar environment is an important capability for mobile autonomous robots and navigation systems. The autonomous agent must be able to localize itself reliably in both indoor and outdoor settings. However, indoor SLAM cannot rely on the Global Positioning System (GPS), and existing methods frequently assume static scenes, an assumption that rarely holds in real-world scenarios. Furthermore, the maps created by traditional SLAM systems carry no semantic information about the environment, which is needed for higher-level tasks such as path navigation or object classification and detection. Computation cost is another important consideration, as SLAM systems are often deployed on embedded platforms. The goal of this project is to develop an edge-server collaborative visual-inertial semantic localization system. A vision-based positioning system (a visual-inertial SLAM system) is designed on the Jetson Xavier NX embedded platform that can operate in dynamic environments and generate context-aware sparse 3D semantic segmentation. The SLAM system on the Jetson Xavier NX communicates the computed locations to a server, which produces a global map. The SLAM system also fuses inertial measurements from an Inertial Measurement Unit (IMU) to reduce uncertainty in pose estimation, enabling robust pose estimates. Finally, the SLAM system relies on a deep learning module to obtain the semantic information used to build global semantic maps.
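The abstract above describes an edge-server split: the Jetson Xavier NX runs the visual-inertial SLAM front end and reports its computed poses (together with semantic labels from the deep-learning module) to a server that assembles the global semantic map. The record does not specify how those updates are exchanged; the sketch below is only a minimal illustration of that division of work, with an assumed JSON-over-UDP transport and made-up names (PoseUpdate, GlobalMapServer, agent_id, labels) that do not come from the project itself.

```python
# Minimal sketch of edge-to-server pose reporting. The message schema, field
# names, and UDP transport are assumptions for illustration only; the actual
# FYP implementation is not described in this record.
import json
import socket
from dataclasses import dataclass, field, asdict

SERVER_ADDR = ("127.0.0.1", 9000)  # hypothetical server endpoint

@dataclass
class PoseUpdate:
    agent_id: str                      # which edge device (e.g. a Jetson Xavier NX)
    timestamp: float                   # time of the keyframe, in seconds
    position: tuple                    # (x, y, z) in the agent's world frame
    orientation: tuple                 # quaternion (qx, qy, qz, qw) from VI-SLAM + IMU
    labels: dict = field(default_factory=dict)  # semantic label -> count of map points

def send_update(sock: socket.socket, update: PoseUpdate) -> None:
    """Edge side: serialise one SLAM keyframe result and push it to the server."""
    sock.sendto(json.dumps(asdict(update)).encode("utf-8"), SERVER_ADDR)

class GlobalMapServer:
    """Server side: accumulate per-agent trajectories into one global semantic map."""
    def __init__(self) -> None:
        self.trajectories: dict[str, list] = {}    # agent_id -> ordered poses
        self.semantic_counts: dict[str, int] = {}  # label -> total observed map points

    def ingest(self, raw: bytes) -> None:
        msg = json.loads(raw.decode("utf-8"))
        self.trajectories.setdefault(msg["agent_id"], []).append(
            (msg["timestamp"], msg["position"], msg["orientation"])
        )
        for label, count in msg.get("labels", {}).items():
            self.semantic_counts[label] = self.semantic_counts.get(label, 0) + count

if __name__ == "__main__":
    # In-process demo: one update flows from the "edge" to the "server".
    server = GlobalMapServer()
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(SERVER_ADDR)
    edge = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_update(edge, PoseUpdate("xavier-nx-0", 12.34, (1.0, 0.5, 0.0),
                                 (0.0, 0.0, 0.0, 1.0), {"door": 42, "chair": 17}))
    server.ingest(recv.recvfrom(4096)[0])
    print(server.trajectories, server.semantic_counts)
```

In the actual system the server would presumably fuse such updates into one consistent global semantic map rather than simply concatenating per-agent trajectories as this toy accumulator does.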
Saved in:
Main Author: | Tay, Kee Kong |
---|---|
Other Authors: | Lam Siew Kei |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems |
Online Access: | https://hdl.handle.net/10356/153291 |
Institution: | Nanyang Technological University |
Language: | English |
id | sg-ntu-dr.10356-153291 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-153291 2021-11-16T08:39:16Z. Edge-server visual localization for autonomous agents. Tay, Kee Kong; Lam Siew Kei (ASSKLam@ntu.edu.sg), School of Computer Science and Engineering. Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems. Bachelor of Engineering (Computer Engineering). 2021-11-16T04:10:44Z 2021-11-16T04:10:44Z 2021. Final Year Project (FYP). Tay, K. K. (2021). Edge-server visual localization for autonomous agents. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153291 en SCSE20-0742 application/pdf Nanyang Technological University |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems |
description | Simultaneous Localization and Mapping (SLAM) of an unfamiliar environment is an important capability for mobile autonomous robots and navigation systems. The autonomous agent must be able to localize itself reliably in both indoor and outdoor settings. However, indoor SLAM cannot rely on the Global Positioning System (GPS), and existing methods frequently assume static scenes, an assumption that rarely holds in real-world scenarios. Furthermore, the maps created by traditional SLAM systems carry no semantic information about the environment, which is needed for higher-level tasks such as path navigation or object classification and detection. Computation cost is another important consideration, as SLAM systems are often deployed on embedded platforms. The goal of this project is to develop an edge-server collaborative visual-inertial semantic localization system. A vision-based positioning system (a visual-inertial SLAM system) is designed on the Jetson Xavier NX embedded platform that can operate in dynamic environments and generate context-aware sparse 3D semantic segmentation. The SLAM system on the Jetson Xavier NX communicates the computed locations to a server, which produces a global map. The SLAM system also fuses inertial measurements from an Inertial Measurement Unit (IMU) to reduce uncertainty in pose estimation, enabling robust pose estimates. Finally, the SLAM system relies on a deep learning module to obtain the semantic information used to build global semantic maps. |
author2 | Lam Siew Kei |
format | Final Year Project |
author | Tay, Kee Kong |
title | Edge-server visual localization for autonomous agents |
publisher | Nanyang Technological University |
publishDate | 2021 |
url | https://hdl.handle.net/10356/153291 |
_version_ | 1718368073314992128 |