Low-cost visual localization system on an embedded platform
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157237
Institution: Nanyang Technological University
Summary: Simultaneous Localization and Mapping (SLAM) of an unknown environment is crucial to the navigation of autonomous robots. The role of SLAM is even more critical for indoor navigation, where the SLAM system cannot rely on the Global Positioning System (GPS). Furthermore, an autonomous robot is typically equipped only with camera sensors and a low-cost embedded system, which poses a challenge in achieving high-speed visual SLAM. The objective of this project is to develop a preliminary Field Programmable Gate Array (FPGA) based sensing and computing stack for visual SLAM. To date, most existing visual SLAM algorithms have been implemented on microprocessors and GPUs. This project aims to port a widely used visual-inertial SLAM framework to the processing system (PS) of an FPGA platform and to calibrate the Inertial Measurement Unit (IMU) and camera sensors to reduce pose estimate uncertainty. The project lays the foundation for future research in hardware acceleration of visual SLAM algorithms on FPGAs.
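The summary mentions calibrating the camera sensors to reduce pose estimate uncertainty but does not name a calibration tool or target. As a minimal sketch of the camera-intrinsics part of that step, assuming a planar checkerboard target and OpenCV (both assumptions, not stated in the record), the intrinsic matrix and distortion coefficients could be estimated as follows:

```python
# Hypothetical sketch: estimating camera intrinsics from checkerboard images
# with OpenCV. The record does not specify the calibration method; this only
# illustrates the kind of camera calibration the summary refers to.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners of the checkerboard (assumed)
SQUARE_SIZE_M = 0.025     # checkerboard square size in metres (assumed)

# 3D corner positions in the board frame, reused for every image
board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE_M

object_points, image_points = [], []
image_size = None
for path in glob.glob("calib/*.png"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        object_points.append(board)
        image_points.append(corners)
        image_size = gray.shape[::-1]

assert object_points, "no checkerboard detections found"

# Intrinsic matrix K and distortion coefficients that minimise reprojection error
rms, K, dist, _, _ = cv2.calibrateCamera(object_points, image_points,
                                         image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```

In a visual-inertial pipeline these intrinsics would typically be combined with IMU noise parameters and the camera-IMU extrinsics before feeding the SLAM framework, but those steps depend on the specific framework and are not detailed in the record.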