OPTIMIZED URBAN TRAFFIC CONTROL WITH ADAPTIVE EXPONENTIAL REWARD DEEP Q NETWORK AT INTERSECTION USING PARTICLE SWARM OPTIMIZATION


Bibliographic Details
Main Author: Aditya Rahman, Muhammad
Format: Final Project
Language: Indonesia
Subjects:
Online Access:https://digilib.itb.ac.id/gdl/view/75412
Institution: Institut Teknologi Bandung
Description
Summary: The excessive number of vehicles on a road network causes congestion. Because traffic conditions are dynamic, a traffic control system is needed that can adapt to them. Indonesia is actively developing Artificial Intelligence-based traffic control systems. A Reinforcement Learning-based traffic controller, the Deep Q Network, has previously been developed in which the traffic-signal phase is selected from a reward composed of weighted load-pressure and queue-length terms. However, these weight settings may not yield an optimal reward with respect to vehicle flow and density, so the reward weights themselves need to be optimized. This research introduces an adaptive reward-weight controller based on the Particle Swarm Optimization algorithm. The resulting weights give better performance than the baseline Deep Q Network algorithm: a maximum vehicle flow of 220 vehicles per hour and a maximum vehicle density of 27 vehicles per kilometer, versus a maximum flow of 213 vehicles per hour and a maximum density of 33 vehicles per kilometer for the baseline. This improvement can increase the productivity of an area.
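The thesis text does not include its implementation, but the approach it describes (Particle Swarm Optimization searching over the reward weights of a traffic controller) can be illustrated with a generic sketch. The PSO loop below is a standard formulation, not the author's code; the `surrogate` objective is a hypothetical stand-in for evaluating a weight pair in a traffic simulation, with an assumed optimum at weights (0.6, 0.4).

```python
import random

def pso(objective, dim=2, n_particles=10, iters=50, bounds=(0.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization; maximizes `objective` over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize particle positions randomly and velocities at zero.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best position
    pbest_val = [objective(p) for p in pos]         # per-particle best value
    g = pbest_val.index(max(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the search bounds.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def surrogate(weights):
    """Hypothetical stand-in for a traffic-simulation evaluation of a
    (pressure weight, queue-length weight) pair; peaks at (0.6, 0.4)."""
    return -((weights[0] - 0.6) ** 2 + (weights[1] - 0.4) ** 2)

best_weights, best_score = pso(surrogate)
print(best_weights, best_score)
```

In the thesis's setting the objective would instead be a full simulation episode scored by vehicle flow and density, which makes each evaluation expensive; PSO suits this because it is derivative-free and parallelizes naturally across particles.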