Traffic signal control for optimized urban mobility

Bibliographic Details
Main Author: Damani, Mehul
Other Authors: Domenico Campolo
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Online Access:https://hdl.handle.net/10356/159038
Institution: Nanyang Technological University
Description
Summary: The aim of traffic signal control (TSC) is to optimize vehicle traffic in urban road networks via the control of traffic lights at intersections. Efficient traffic signal control can significantly reduce the detrimental impacts of traffic congestion, such as environmental pollution, passenger frustration, and economic losses due to wasted time (e.g., for delivery or emergency vehicles). At present, fixed-time controllers, which use offline data to fix the duration of traffic signal phases, remain the most widespread. However, urban traffic exhibits complex spatio-temporal patterns, such as peak congestion at the end of a workday. Fixed-time controllers, which follow a pre-defined control rule, cannot account for such dynamic patterns; as a result, there has been a recent push toward adaptive traffic signal control methods that dynamically adjust their control rule based on locally-sensed real-time traffic conditions. Reinforcement learning (RL) is one such adaptive, versatile, data-driven method, which has shown great promise in a variety of decision-making problems. Combined with deep learning, RL can be leveraged to learn powerful control policies for highly complex tasks. This work focuses on decentralized adaptive TSC and proposes a distributed multi-agent reinforcement learning (MARL) framework in which each agent is a traffic intersection tasked with selecting that intersection's traffic phase, based on locally-sensed traffic conditions and communication with its neighbors. Because the intersections/agents are highly connected and interdependent, cooperation among them is key to achieving the desired bottom-up, network-wide traffic optimization. To this end, this work proposes a novel social intrinsic reward mechanism for learning locally-cooperative traffic signal control policies. Counterfactually-predicted states, obtained using a learned dynamics model, are used to compute an intrinsic reward that captures the impact an agent's immediate actions have on its neighbouring agents' future states, thus encouraging locally-selfless behaviors. In contrast to simply sharing rewards among neighbors, which usually increases reward noise, the proposed intrinsic reward allows agents to explicitly assign credit to one another, leading to more stable and faster convergence to enhanced-cooperation policies. We present extensive comparisons against state-of-the-art TSC baselines on the Manhattan 5x5 traffic network using the standard traffic simulator SUMO, where our proposed framework exhibits comparable or improved performance.
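
To illustrate the counterfactual intrinsic reward described in the summary, the following is a minimal sketch in Python/PyTorch. The class and function names, the one-hot phase encoding, the use of a fixed baseline phase as the counterfactual, and the queue-length-based measure of "benefit" are illustrative assumptions for exposition only; the thesis's exact formulation is not reproduced here.

import torch
import torch.nn as nn

class NeighborDynamicsModel(nn.Module):
    # Learned model predicting a neighbor's next local state (e.g., lane
    # queue lengths) from its current state and this agent's action,
    # encoded as a one-hot traffic phase. Illustrative architecture.
    def __init__(self, state_dim: int, num_phases: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_phases, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, phase], dim=-1))

def social_intrinsic_reward(model, neighbor_state, chosen_phase, baseline_phase):
    # Counterfactual intrinsic reward for one (agent, neighbor) pair:
    # compare the neighbor's predicted future state under the agent's
    # chosen phase against a counterfactual baseline phase.
    with torch.no_grad():
        predicted = model(neighbor_state, chosen_phase)
        counterfactual = model(neighbor_state, baseline_phase)
    # Illustrative assumption: state entries are queue lengths, so the
    # reward is positive when the chosen phase reduces predicted queues
    # relative to the counterfactual.
    return (counterfactual.sum() - predicted.sum()).item()

if __name__ == "__main__":
    state_dim, num_phases = 8, 4
    model = NeighborDynamicsModel(state_dim, num_phases)
    state = torch.rand(state_dim)        # neighbor's sensed queue lengths
    chosen = torch.eye(num_phases)[2]    # agent's selected phase (one-hot)
    baseline = torch.eye(num_phases)[0]  # counterfactual baseline phase
    print(social_intrinsic_reward(model, state, chosen, baseline))

Differencing the counterfactual and chosen-action predictions isolates the marginal effect of the agent's own decision on its neighbor, which is what lets agents assign credit to one another explicitly instead of sharing a noisier joint reward.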