Hybrid SLAM and object recognition on an embedded platform

Full description

Bibliographic Details
Main Author: Chan, Jaryl Jia Le
Other Authors: Lam Siew Kei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/163416
Description
Summary: Simultaneous Localization and Mapping (SLAM) is a technique used in robotics to enable mobile robots to navigate unfamiliar environments. Visual SLAM is a subset of SLAM that uses a camera as its primary sensor, effectively giving mobile robots a sense of vision. Traditionally, Visual SLAM uses camera images solely to perform SLAM. We propose the addition of an Object Recognition subsystem that reuses the same images already being processed for Visual SLAM and supplements the system with additional contextual information. This project presents the development of a hybrid SLAM and Object Recognition system capable of augmenting existing SLAM applications with the contextual information gathered by Object Recognition techniques. The hybrid system is developed on the NVIDIA Jetson Xavier NX embedded platform, with a Stereolabs ZED 2 stereo AI camera providing a live video feed. The backbone of the system is the ORB-SLAM3 Visual SLAM algorithm, one of the most widely recognized and capable Visual SLAM algorithms available today. The Object Recognition component is handled by a deep-learning YOLO-based model, which is fast enough for real-time detection.
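The core idea of the hybrid architecture described above is that each camera frame is shared between the two subsystems: ORB-SLAM3 estimates the camera pose while a YOLO-based detector labels objects in the same image. A minimal Python sketch of that per-frame loop is shown below; `track_frame` and `detect_objects` are hypothetical stand-ins for the real ORB-SLAM3 and YOLO calls, and the returned values are placeholders, not outputs of the actual system.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Hypothetical camera pose: translation (x, y, z) in metres.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Detection:
    label: str
    confidence: float

def track_frame(frame) -> Pose:
    # Stand-in for ORB-SLAM3 tracking: the real system would extract
    # ORB features from the frame and estimate the camera pose
    # against the map. A fixed pose is returned here for illustration.
    return Pose(0.1, 0.0, 0.0)

def detect_objects(frame) -> list:
    # Stand-in for the YOLO detector: the real system would run a
    # forward pass on the same frame and return labelled bounding
    # boxes. A fixed detection is returned here for illustration.
    return [Detection("chair", 0.91)]

def process_frame(frame) -> dict:
    # The hybrid design: one frame feeds both subsystems, and the
    # detections annotate the SLAM pose estimate with scene context.
    pose = track_frame(frame)
    detections = detect_objects(frame)
    return {"pose": pose, "objects": [d.label for d in detections]}

# A real system would pass a live ZED 2 image here; None stands in
# for the frame since both subsystems are stubbed.
result = process_frame(frame=None)
print(result["objects"])
```

In the actual project, the two calls inside `process_frame` would run concurrently so that object detection does not stall SLAM tracking, which is why a fast YOLO model matters for real-time performance on the Jetson Xavier NX.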