Object-oriented indoor navigation for delivery robot

Object-oriented navigation for delivery robots is widely used in daily life. This thesis addresses indoor delivery navigation in unseen environments through vision-and-language navigation (VLN), a task that has attracted considerable attention in recent years. In VLN research, cross-modal interaction between vision and language has made significant progress over the past two years alongside the rapid development of computer vision (CV) and natural language processing (NLP), and the emergence of BERT models has further aided the training and construction of navigation frameworks. Although BERT-based models perform well in VLN, a mismatch between instructions and visual information at the input leads to navigation errors for robots in similar-looking scenes. This thesis introduces a cross-modal interaction transformer that resolves this mismatch between instruction and visual information, optimizing the input to the BERT model and improving the navigation success rate of the delivery robot.
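
The record does not reproduce the thesis code; as a minimal sketch of the kind of cross-modal interaction module the abstract describes, the following PyTorch snippet lets instruction tokens and visual features attend to each other before the fused sequence is passed to a BERT-style encoder. The module name, dimensions, and concatenation-based fusion are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch: mutual attention between instruction tokens and
# visual features, used to align the two modalities before a BERT-style
# VLN policy consumes them. Not taken from the thesis.
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    """Language tokens attend to visual features and vice versa."""

    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Language queries attend over visual keys/values.
        self.lang_to_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Visual queries attend over language keys/values.
        self.vis_to_lang = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_lang = nn.LayerNorm(d_model)
        self.norm_vis = nn.LayerNorm(d_model)

    def forward(self, lang_feats: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # lang_feats: (batch, n_tokens, d_model) instruction embeddings
        # vis_feats:  (batch, n_views,  d_model) per-view image embeddings
        lang_attended, _ = self.lang_to_vis(lang_feats, vis_feats, vis_feats)
        vis_attended, _ = self.vis_to_lang(vis_feats, lang_feats, lang_feats)
        # Residual connections keep each modality's original information.
        lang_out = self.norm_lang(lang_feats + lang_attended)
        vis_out = self.norm_vis(vis_feats + vis_attended)
        # Concatenate along the sequence axis as input to a BERT-style encoder.
        return torch.cat([lang_out, vis_out], dim=1)


if __name__ == "__main__":
    model = CrossModalInteraction()
    lang = torch.randn(2, 40, 768)   # 40 instruction tokens
    vis = torch.randn(2, 36, 768)    # 36 panoramic view features
    fused = model(lang, vis)
    print(fused.shape)               # torch.Size([2, 76, 768])
```

The mutual attention is the point of such a module: each modality is re-weighted by the other before the BERT model sees the input, which is one way to target the instruction/vision mismatch the abstract highlights.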

Bibliographic Details
Main Author: Li, Yuanwei
Other Authors: Wang Dan Wei (EDWWANG@ntu.edu.sg)
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access: https://hdl.handle.net/10356/173302
Institution: Nanyang Technological University
School: School of Electrical and Electronic Engineering
Degree: Master's degree
Date Issued: 2023
Date Deposited: 2024-01-24
Collection: DR-NTU (NTU Library)
File Format: application/pdf
Citation: Li, Y. (2023). Object-oriented indoor navigation for delivery robot. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/173302