Hybrid SLAM and object recognition on an embedded platform
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/163416
Institution: Nanyang Technological University
Summary: Simultaneous Localization and Mapping (SLAM) is a technique used in robotics to let mobile robots navigate unfamiliar environments. Visual SLAM is a subset of SLAM that uses a camera as the primary sensor, giving mobile robots a form of vision. Traditionally, Visual SLAM uses the camera images solely to perform SLAM. We propose the addition of an Object Recognition subsystem that reuses the same images already being processed for Visual SLAM, supplementing the map with additional contextual information. This project develops a Hybrid SLAM and Object Recognition system capable of augmenting existing SLAM applications with the contextual information gathered by Object Recognition techniques. The hybrid system is built on the Jetson Xavier NX embedded platform, with a Stereolabs ZED 2 stereo AI camera providing the live video feed. The backbone of the system is the ORB-SLAM3 Visual SLAM algorithm, one of the most widely recognized and capable Visual SLAM algorithms available today. The Object Recognition component is handled by a deep-learning YOLO-based model, which is fast enough for real-time detection.
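The core idea in the abstract is that a single camera frame feeds two subsystems: the SLAM backbone estimates the camera pose while the detector extracts object labels, and the two outputs are associated per frame. A minimal sketch of that dispatch pattern is below; `track_pose` and `detect_objects` are hypothetical placeholders, not the real ORB-SLAM3 or YOLO APIs, and the pose/detection values are dummies for illustration only.

```python
# Sketch of the hybrid pipeline: one frame is consumed by both the SLAM
# tracker and the object detector, and the results are bundled together.
# The two worker functions are stand-ins (assumed, not the real libraries).

from dataclasses import dataclass, field


@dataclass
class Detection:
    label: str          # object class reported by the detector
    confidence: float   # detector confidence score


@dataclass
class HybridFrameResult:
    pose: tuple                                     # camera pose from the SLAM backbone
    detections: list = field(default_factory=list)  # objects found in the same frame


def track_pose(frame):
    # Placeholder for ORB-SLAM3 tracking; returns a dummy (x, y, z) pose.
    return (0.0, 0.0, 0.0)


def detect_objects(frame):
    # Placeholder for the YOLO-based detector; returns a dummy detection.
    return [Detection("chair", 0.9)]


def process_frame(frame):
    # Both subsystems consume the *same* image, so each detection can be
    # attached to the pose estimated for that exact frame.
    return HybridFrameResult(pose=track_pose(frame), detections=detect_objects(frame))


result = process_frame(frame=None)
```

Associating detections with the per-frame pose is what lets detected objects later be anchored into the SLAM map as contextual landmarks.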