Autonomous unmanned vehicle in indoor environment

The vehicle to be implemented, more specifically an unmanned ground vehicle (UGV), should be able to map an unknown environment and later navigate from a given source location to a destination location while avoiding obstacles. SLAM (Simultaneous Localization and Mapping) is the core technique adopted: the vehicle builds a map of its working environment and localizes itself within it. It estimates odometry by tracking features in the environment and extends the map incrementally, and by detecting loop closures it can use previously visited places to reduce accumulated map errors. Fusing several proprioceptive sensors would make the map estimation more robust [1][2]; however, for cost reasons only a single RGB-D camera was used in this project, so odometry was estimated visually, at the risk of losing odometry when feature tracking fails.

Since the chosen sensor is an RGB-D camera, it also provides depth information about the surroundings. The depth image can be aligned with the color frame so that each pixel of the color frame has a corresponding depth value, and a simple obstacle avoidance algorithm was built on this property. Once the map of the environment was constructed and saved, the coordinates of the destination point could be set on it. By comparing the current view with the previously generated map, the UGV could determine its current location within the map. A routing strategy was then introduced to navigate the rover to the destination, combined with the obstacle avoidance algorithm described above.
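The report itself is only linked below, so the exact avoidance routine is not reproduced here; the following is a minimal sketch of how a depth-based obstacle check of the kind described might look, assuming the depth image has already been aligned to the color frame and is available as a NumPy array of distances in millimeters. The thresholds and the helper names (obstacle_ahead, steer_away) are illustrative assumptions, not the author's implementation.

```python
# Minimal, illustrative sketch (not the author's code): obstacle check on a
# depth image that has already been aligned to the color frame, given as a
# NumPy array of distances in millimeters (0 means no depth reading).
import numpy as np

STOP_DISTANCE_MM = 500      # assumed safety threshold of half a meter
MIN_BLOCKED_PIXELS = 200    # assumed noise filter: ignore tiny close blobs

def obstacle_ahead(aligned_depth: np.ndarray) -> bool:
    """Return True if enough pixels in the central region of the aligned
    depth image are closer than the stop threshold."""
    h, w = aligned_depth.shape
    roi = aligned_depth[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    valid = roi[roi > 0]                              # drop missing readings
    return np.count_nonzero(valid < STOP_DISTANCE_MM) > MIN_BLOCKED_PIXELS

def steer_away(aligned_depth: np.ndarray) -> str:
    """Turn toward whichever half of the view has more free space,
    measured as the larger mean depth."""
    h, w = aligned_depth.shape
    left, right = aligned_depth[:, : w // 2], aligned_depth[:, w // 2 :]
    left_mean = left[left > 0].mean() if np.any(left > 0) else 0.0
    right_mean = right[right > 0].mean() if np.any(right > 0) else 0.0
    return "turn_left" if left_mean > right_mean else "turn_right"
```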

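The abstract also mentions a routing strategy over the saved map without specifying it. As one common possibility, the sketch below runs A* search over a 2-D occupancy grid assumed to be derived from the saved map; the grid representation, the 4-connectivity, and the function name plan_path are assumptions made for illustration, not the report's method.

```python
# Illustrative sketch (assumption, not the report's method): A* path planning
# on a 4-connected occupancy grid derived from the saved map.
# grid[r][c] == 1 marks an occupied cell, 0 a free cell.
import heapq

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        # Manhattan distance is admissible on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(heuristic(start), start)]
    came_from = {start: None}
    cost_so_far = {start: 0}

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = []                      # walk parent links back to start
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:  # skip occupied cells
                continue
            new_cost = cost_so_far[current] + 1
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(open_heap, (new_cost + heuristic(nxt), nxt))
    return None
```

A cell path produced this way would still need to be converted into motion commands and interleaved with a depth-based obstacle check such as the one sketched above.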

Bibliographic Details
Main Author: Xiong, Zhiwei
Other Authors: Guan Yong Liang
Format: Final Year Project
Language: English
Published: 2019
Subjects: DRNTU::Engineering::Electrical and electronic engineering
Online Access:http://hdl.handle.net/10356/77345
Institution: Nanyang Technological University
id sg-ntu-dr.10356-77345
record_format dspace
contributor School of Electrical and Electronic Engineering
contributor Arete M Pte. Ltd.
degree Bachelor of Engineering (Electrical and Electronic Engineering)
date_accessioned 2019-05-27
physical_description 52 p., application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Electrical and electronic engineering
author2 Guan Yong Liang
format Final Year Project
author Xiong, Zhiwei
title Autonomous unmanned vehicle in indoor environment
publishDate 2019
url http://hdl.handle.net/10356/77345