Learn to navigate through deep neural networks
Main Author: | Wu, Keyu |
---|---|
Other Authors: | Wang Han |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2020 |
Subjects: | Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/139680 |
Institution: | Nanyang Technological University |
Abstract:
Autonomous navigation is a crucial prerequisite for mobile robots to perform various tasks, yet it remains a great challenge due to its inherent complexity. This thesis addresses the autonomous navigation problem using deep neural networks. It comprises four main parts: an imitation-learning-based path planning algorithm, an imitation-learning-based online path planning method, a deep-reinforcement-learning-based autonomous steering method, and a deep-reinforcement-learning-based autonomous navigation method.
First, as the basis of navigation, path planning has been studied extensively for decades. The computational time of most existing methods depends on environmental conditions, which forces a compromise between time efficiency and path quality. To address this challenge, a novel end-to-end deep neural network architecture is proposed to learn 3D path planning policies. By embedding the concept of action decomposition and composition, the proposed network can generate actions in 3D space using only 2D convolutional neural networks and exhibits high generalization capability. Moreover, its computational time per action prediction is almost independent of environmental scale and complexity.
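The thesis abstract does not spell out the exact decomposition scheme, but the idea of composing a 3D action from two 2D decisions can be illustrated with a minimal sketch: one sub-policy chooses a horizontal direction on the x-y projection, another chooses a vertical motion on a vertical slice, and the two choices are composed into a single 3D step. All names and action sets below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

# Hypothetical illustration of "action decomposition and composition":
# a 3D action is composed from a horizontal choice (made on the x-y
# projection by one 2D CNN) and a vertical choice (made on a vertical
# slice by another), so each sub-policy only needs 2D convolutions.

HORIZONTAL = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
VERTICAL = [-1, 0, 1]  # descend, stay level, ascend

def compose(h_idx: int, v_idx: int) -> np.ndarray:
    """Compose a 3D unit step from the two 2D decisions."""
    dx, dy = HORIZONTAL[h_idx]
    step = np.array([dx, dy, VERTICAL[v_idx]], dtype=float)
    return step / np.linalg.norm(step)

# 8 horizontal x 3 vertical choices cover 24 distinct 3D directions,
# while each sub-network only ever outputs over a small 2D action set.
actions = {(h, v): tuple(np.round(compose(h, v), 3))
           for h in range(8) for v in range(3)}
```

The payoff of this decomposition is that the output space of each network stays small (8 + 3 logits instead of 24 joint actions), which is one plausible reading of how 2D networks can produce 3D motions.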
Furthermore, a deep-neural-network-based online path planning method is proposed. An end-to-end network architecture is designed to learn 3D local path planning policies, and a corresponding planning framework is developed to achieve real-time online path planning in unknown environments. Within this framework, actions are determined efficiently based on the agent's current location, surrounding obstacles, and target position. The planner's efficacy is further improved by switching among multiple networks that consider different environmental ranges, while line-of-sight checks are performed to optimize path quality. Without any prior knowledge of the environment, the proposed online planner generates near-optimal paths in various unknown cluttered environments. Moreover, its computational time and effectiveness are both independent of environmental scale and complexity, which demonstrates its suitability for large-scale complex environments.
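The online planning loop described above can be sketched as follows. The helper names (`plan_step`, `line_of_sight`, the `networks` dict) and the range threshold are placeholders assumed for illustration, not the thesis's API; the point is the control flow — pick a network by obstacle range, shortcut with a line-of-sight check, and otherwise query the learned policy.

```python
import math

def line_of_sight(p, goal, obstacles, step=0.1, clearance=0.5):
    """Return True if the straight segment p -> goal stays clear of obstacles."""
    dist = math.dist(p, goal)
    n = max(int(dist / step), 1)
    for i in range(n + 1):
        t = i / n
        q = tuple(a + t * (b - a) for a, b in zip(p, goal))
        if any(math.dist(q, o) < clearance for o in obstacles):
            return False
    return True

def plan_step(p, goal, obstacles, networks):
    """One iteration of the hypothetical online planning loop."""
    # switch among networks trained for different environmental ranges
    nearest = min((math.dist(p, o) for o in obstacles), default=float("inf"))
    net = networks["short"] if nearest < 2.0 else networks["long"]
    if line_of_sight(p, goal, obstacles):   # line-of-sight shortcut
        return goal                          # head straight to the target
    return net(p, goal, obstacles)           # otherwise query the policy
```

Because each iteration only looks at the agent's local surroundings, the per-step cost of such a loop is independent of the overall map size, which matches the scalability claim above.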
Compared with solving the path planning problem in isolation, it is preferable to achieve autonomous navigation by mapping raw sensor data directly to control commands. It is also desirable to learn from past experience automatically so as to generalize better to unseen circumstances. Therefore, a deep reinforcement learning (DRL) algorithm is proposed to achieve autonomous steering in complex environments. The developed model derives steering commands from raw depth images in an end-to-end manner. By embedding a branching noisy dueling architecture, the proposed DRL algorithm learns the autonomous steering policy more effectively while determining linear and angular velocities simultaneously. A two-stream feature extractor is introduced to improve depth feature extraction by explicitly considering temporal variations, and a new action selection strategy achieves motion filtering by taking the consistency of angular velocity into account. Notably, the developed model transfers readily from a simple virtual training environment to various complicated real-world deployments without any fine-tuning.
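A branching dueling Q-head can be sketched in a few lines of numpy: a shared state value V(s) plus one advantage branch per action dimension, so linear and angular velocity are selected jointly without enumerating the product action space. The noisy layers and the convolutional feature extractor are omitted, and all sizes and weights below are illustrative assumptions rather than the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, N_LIN, N_ANG = 16, 5, 7              # feature size, bins per branch

W_v = rng.normal(size=(FEAT, 1))           # shared state-value head V(s)
W_lin = rng.normal(size=(FEAT, N_LIN))     # linear-velocity advantage branch
W_ang = rng.normal(size=(FEAT, N_ANG))     # angular-velocity advantage branch

def branched_q(features: np.ndarray):
    """Dueling aggregation per branch: Q_b = V + A_b - mean(A_b)."""
    v = features @ W_v
    q = []
    for W in (W_lin, W_ang):
        adv = features @ W
        q.append(v + adv - adv.mean(axis=1, keepdims=True))
    return q                               # [Q over linear, Q over angular]

feats = rng.normal(size=(1, FEAT))
q_lin, q_ang = branched_q(feats)
action = (int(q_lin.argmax()), int(q_ang.argmax()))   # joint action pick
```

With branching, the head outputs 5 + 7 values instead of 5 x 7 = 35 joint Q-values, which is what makes the simultaneous determination of both velocities tractable.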
Finally, to address the more challenging goal-directed autonomous navigation problem, a novel deep reinforcement learning algorithm and a tailored network architecture are proposed. The developed model outputs control commands directly from raw depth images and encoded destination information, so that the robot can reach goal positions while avoiding collisions with obstacles. A new depth feature extractor is introduced to acquire critical spatiotemporal features from raw depth images, and a double-source scheme provides more comprehensive learning samples based on a switching criterion. Moreover, a dual network architecture is proposed that trains two networks with different tasks simultaneously: the primary network learns the navigation policy, while the auxiliary network learns the depth feature extractor. Notably, after being trained only in a simple virtual environment, the developed model is readily deployable to a variety of complex real-world scenarios without any fine-tuning.
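The dual-network idea — a shared feature extractor shaped by an auxiliary task while the primary head learns the navigation policy — can be captured structurally. The class and method names below are hypothetical placeholders chosen for this sketch, not identifiers from the thesis.

```python
# Structural sketch of the dual-network scheme: both heads share one
# depth feature extractor; the auxiliary task trains the extractor,
# and the primary head learns the navigation policy on top of it.

class DualModel:
    def __init__(self, extractor, policy_head, aux_head):
        self.extractor = extractor   # shared spatiotemporal depth features
        self.policy = policy_head    # primary: navigation commands
        self.aux = aux_head          # auxiliary: e.g. a depth-related task

    def navigate(self, depth_stack, goal_code):
        z = self.extractor(depth_stack)
        return self.policy(z, goal_code)   # control command toward the goal

    def aux_output(self, depth_stack):
        # gradients from this branch shape the shared extractor
        return self.aux(self.extractor(depth_stack))
```

The design choice this illustrates: the policy head never has to learn perception from scratch, because the auxiliary branch keeps supplying a training signal to the shared extractor.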
In summary, this thesis addresses autonomous navigation problems through deep neural networks. Both virtual and real-world experiments have demonstrated the effectiveness and superiority of the proposed methods.
School: | School of Electrical and Electronic Engineering |
---|---|
Citation: | Wu, K. (2020). Learn to navigate through deep neural networks. Doctoral thesis, Nanyang Technological University, Singapore. |
DOI: | 10.32657/10356/139680 |
License: | Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) |