Indoor autonomous robot navigation by using visually grounded semantic instructions

Navigation is becoming an increasingly substantial part of autonomy in robotics. As robot capabilities are researched and developed towards increasingly complex tasks, navigation has become a topic that many are exploring. There have been numerous developments in navigation, from using Simultaneous Localization and Mapping (SLAM) to build maps, to exploring other forms of maps such as topological, semantic and even abstract maps. This project explores a different kind of navigation that is essentially map-less, which is especially useful in foreign environments of which the robot has no prior knowledge. More specifically, this project defines the navigation goal as a simple string: either a room name or a unit number. Without a map as a reference, the robot must rely on extracting text information from its surroundings to identify its goal. Although the focus of this project is goal-oriented navigation using text from the environment, the idea being explored could be applied to many other functions and even used together with existing work. This study is similar to previous work on extracting directional instructions for the robot to follow from signs, except that it focuses on finding the goal by verifying different potential end points in the environment, which in this case are doors.
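
The report's implementation is not reproduced in this record, but the goal-verification step the abstract describes can be sketched. The snippet below is a minimal illustration, assuming an OCR front end has already returned the text snippets read near a candidate door; the function names, the normalisation rule and the 0.8 similarity threshold are assumptions for illustration, not the project's actual method.

import re
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase and drop non-alphanumeric characters so that OCR output
    such as 'Rm. B3-204' can be compared with a goal string like 'B3204'."""
    return re.sub(r"[^a-z0-9]", "", text.lower())


def matches_goal(detected_texts, goal: str, threshold: float = 0.8) -> bool:
    """Return True if any text snippet read near a candidate door is
    sufficiently similar to the goal string (a room name or unit number)."""
    goal_norm = normalize(goal)
    for snippet in detected_texts:
        similarity = SequenceMatcher(None, normalize(snippet), goal_norm).ratio()
        if similarity >= threshold:
            return True
    return False


# Hypothetical snippets an OCR module might return for one door sign.
print(matches_goal(["Rm B3-204", "Control Lab"], "B3204"))        # True
print(matches_goal(["Rm B3-204", "Control Lab"], "Server Room"))  # False

Normalising before comparison absorbs the punctuation and spacing that differ between signage and a typed goal string, while the fuzzy ratio tolerates the occasional character-level OCR error; an exact string match would fail on either.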


Bibliographic Details
Main Author: Tan, Mei Yu
Other Authors: Soong Boon Hee
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access:https://hdl.handle.net/10356/158164
Institution: Nanyang Technological University
Language: English
Record ID: sg-ntu-dr.10356-158164 (last updated 2023-07-07)
Collection: DR-NTU, NTU Library
School: School of Electrical and Electronic Engineering
Organisation: A*STAR Institute of Materials Research and Engineering
Contact: EBHSOONG@ntu.edu.sg
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Date Deposited: 2022-05-30
Project Code: B3204-211
File Format: application/pdf
Citation: Tan, M. Y. (2022). Indoor autonomous robot navigation by using visually grounded semantic instructions. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158164