Learning-aided visual inertial odometry for mobile robots
This research presents a novel approach to visual-inertial odometry (VIO) for challenging environments, built on VINS-Fusion. The proposed method uses deep learning to improve state estimation: semantic segmentation is employed to highlight ground features such as lane markings and ground bricks. Experimental results demonstrate the method's effectiveness in improving the robustness and accuracy of the VIO system in semi-outdoor environments with dynamic objects. The report concludes with a summary of the main findings and recommendations for future research. This research has the potential to enhance the capabilities of autonomous systems in indoor environments such as factories, hospitals, and shopping centers.
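The thesis itself is not reproduced in this record; as a rough, hypothetical sketch of the idea described in the abstract (the function name, class IDs, and filtering step below are assumptions, not taken from the thesis), the following Python snippet illustrates how a per-pixel semantic segmentation mask might be used to keep ground features (e.g., lane markings, ground bricks) and discard points on dynamic objects before they reach a VIO front end such as VINS-Fusion's feature tracker.

```python
import numpy as np

# Hypothetical illustration (not from the thesis): keep only tracked feature
# points that fall on ground classes of interest and drop points lying on
# dynamic objects before they are passed to the VIO back end.

GROUND_CLASSES = {7, 8}      # assumed label IDs for "lane marking" / "ground brick"
DYNAMIC_CLASSES = {11, 12}   # assumed label IDs for "person" / "vehicle"

def filter_features(points, seg_mask):
    """points: (N, 2) array of (u, v) pixel coordinates from the feature tracker.
    seg_mask: (H, W) array of per-pixel class labels from a segmentation network.
    Returns the subset of points on ground classes and not on dynamic objects."""
    u = points[:, 0].astype(int)
    v = points[:, 1].astype(int)
    labels = seg_mask[v, u]  # class label under each feature point
    keep = np.isin(labels, list(GROUND_CLASSES)) & ~np.isin(labels, list(DYNAMIC_CLASSES))
    return points[keep]
```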
Saved in:
Main Author: Heng, Yu Xi
Other Authors: Xie Lihua (School of Electrical and Electronic Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Electrical and electronic engineering
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Online Access: https://hdl.handle.net/10356/167209
Institution: Nanyang Technological University
Citation: Heng, Y. X. (2023). Learning-aided visual inertial odometry for mobile robots. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167209