Dynamic route guidance arithmetic based on deep reinforcement learning

Bibliographic Details
Main Author: Jiang, Zhichao
Other Authors: Su Rong
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2022
Online Access:https://hdl.handle.net/10356/159264
Institution: Nanyang Technological University
Description
Summary: Routing navigation is an essential decision-making topic in the transportation management field. Many routing algorithms have been introduced to solve the route planning problem, aiming to reduce vehicles' traveling time. However, these classical algorithms perform well in static traffic networks rather than under real-time traffic conditions. To address this issue, this project proposes an approach based on reinforcement learning (RL) to handle dynamic traffic networks, meaning the approach can adapt itself to uncertain traffic conditions. The RL framework of this project is mainly based on the deep Q-network (DQN), which controls each vehicle's decision at an intersection and guides it to its destination along the optimal route. The traffic data used to train the RL agent is collected from the SUMO traffic network simulator. Finally, the performance is further validated through the Friedman test. Comparisons between the classical and RL-based algorithms validate the approach and show that the latter performs better at avoiding traffic congestion, achieving shorter traveling times in complex traffic networks.
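
To illustrate the routing-decision mechanism the summary describes, the sketch below shows how a DQN could pick the outgoing edge a vehicle should take at an intersection. This is a minimal Python/PyTorch sketch under assumptions of my own: the state dimension, action set, layer sizes, and the names QNetwork and choose_next_edge are hypothetical and not taken from the thesis; in the actual work the state and rewards would be built from SUMO traffic data rather than random placeholders.

```python
# Minimal sketch of DQN-based action selection at an intersection (illustrative only;
# the state/action encodings and network sizes here are assumptions, not the thesis's).
import random
import torch
import torch.nn as nn

STATE_DIM = 8     # e.g. occupancies / travel-time estimates of nearby edges (assumed)
N_ACTIONS = 4     # one action per outgoing edge at the intersection (assumed)

class QNetwork(nn.Module):
    """Maps a local traffic-state vector to Q-values, one per candidate outgoing edge."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def choose_next_edge(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the outgoing edge index for the vehicle."""
    if random.random() < epsilon:          # explore: random outgoing edge
        return random.randrange(N_ACTIONS)
    with torch.no_grad():                  # exploit: edge with the highest Q-value
        return int(q_net(state).argmax().item())

if __name__ == "__main__":
    q_net = QNetwork(STATE_DIM, N_ACTIONS)
    state = torch.rand(STATE_DIM)          # placeholder; would come from the SUMO simulation
    print("chosen outgoing edge index:", choose_next_edge(q_net, state))
```

In a training loop, the chosen edge would be applied in the simulator, the observed travel time would supply the reward, and the transition would be stored for standard DQN updates; the epsilon-greedy rule above is only the decision step the abstract refers to.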