Traffic signal control for optimized urban mobility

The aim of traffic signal control (TSC) is to optimize vehicle traffic in urban road networks via the control of traffic lights at intersections. Efficient traffic signal control can significantly reduce the detrimental impacts of traffic congestion, such as environmental pollution, passenger frustration and economic losses due to wasted time (e.g., surrounding delivery or emergency vehicles). At present, fixed-time controllers, which use offline data to fix the duration of traffic signal phases, remain the most widespread. However, urban traffic exhibits complex spatio-temporal patterns, such as peak congestion at the end of a workday. Fixed-time controllers, which follow a pre-defined control rule, cannot account for such dynamic patterns; as a result, there has been a recent push for adaptive traffic signal control methods that dynamically adjust their control rule based on locally-sensed real-time traffic conditions. Reinforcement learning (RL) is one such adaptive and versatile data-driven method, which has shown great promise in a variety of decision-making problems. Combined with deep learning, RL can be leveraged to learn powerful control policies for highly complex tasks. This work focuses on decentralized adaptive TSC and proposes a distributed multi-agent reinforcement learning (MARL) framework, where each agent in the system is a traffic intersection tasked to select the traffic phase of that intersection based on locally-sensed traffic conditions and communication with its neighbors. However, due to the highly connected and interdependent nature of the intersections/agents, cooperation among these intersections is key to achieving the type of bottom-up, network-wide traffic optimization desired. To this end, this work proposes a novel social intrinsic reward mechanism to learn locally-cooperative traffic signal control policies. Counterfactually-predicted states, obtained using a learned dynamics model, are used to compute an intrinsic reward that captures the impact an agent's immediate actions have on its neighbouring agents' future states, thus encouraging locally-selfless behaviors. In contrast to simply sharing rewards among neighbors, which usually increases reward noise, the proposed intrinsic reward allows agents to explicitly assign credit to each other, leading to more stable and faster convergence to enhanced-cooperation policies. We present extensive comparison results against state-of-the-art methods on the Manhattan 5x5 traffic network using the standard traffic simulator, SUMO. Our results show that the proposed framework exhibits comparable or improved performance over state-of-the-art TSC baselines.
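The counterfactual intrinsic reward described in the abstract can be illustrated with a minimal sketch. Everything here is assumed for illustration: the dynamics model is a stand-in fixed linear map (the actual work learns one from traffic data), and the function names, state/action shapes, and reward form (comparing predicted neighbour states under the actual versus a counterfactual action) are hypothetical, not the thesis's implementation.

```python
import numpy as np

def predict_next_state(state, action, W):
    """Stand-in 'learned' dynamics model: predicts a neighbour's next
    state from its current state and this agent's action (linear map)."""
    return W @ np.concatenate([state, action])

def counterfactual_intrinsic_reward(neighbour_state, actual_action,
                                    default_action, W):
    """Social intrinsic reward sketch: how much better off the neighbour's
    predicted future state is under the agent's actual action than under a
    counterfactual default action. Here, lower total state (e.g., summed
    queue lengths) is assumed to be better."""
    s_actual = predict_next_state(neighbour_state, actual_action, W)
    s_counter = predict_next_state(neighbour_state, default_action, W)
    # Positive when the actual action leaves the neighbour better off,
    # giving the agent explicit credit for locally-selfless behavior.
    return float(np.sum(s_counter) - np.sum(s_actual))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))   # state dim 4, action dim 2 (one-hot phase)
s = rng.normal(size=4)
r = counterfactual_intrinsic_reward(s, np.array([1.0, 0.0]),
                                    np.array([0.0, 1.0]), W)
```

Because the comparison is between two predictions from the same model, this signal attributes the difference to the agent's own action choice, rather than mixing in reward noise from the neighbour's other influences as plain reward sharing would.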


Bibliographic Details
Main Author: Damani, Mehul
Other Authors: Domenico Campolo
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159038
Institution: Nanyang Technological University
id sg-ntu-dr.10356-159038
record_format dspace
spelling sg-ntu-dr.10356-159038 2023-03-04T20:18:47Z Traffic signal control for optimized urban mobility Damani, Mehul Domenico Campolo School of Mechanical and Aerospace Engineering Guillaume Sartoretti d.campolo@ntu.edu.sg Engineering::Mechanical engineering The aim of traffic signal control (TSC) is to optimize vehicle traffic in urban road networks, via the control of traffic lights at intersections. Efficient traffic signal control can significantly reduce the detrimental impacts of traffic congestion, such as environmental pollution, passenger frustration and economic losses due to wasted time (e.g., surrounding delivery or emergency vehicles). At present, fixed-time controllers, which use offline data to fix the duration of traffic signal phases, remain the most widespread. However, urban traffic exhibits complex spatio-temporal patterns, such as peak congestion at the end of a workday. Fixed-time controllers, which have a pre-defined control rule, are unable to account for such dynamic patterns and as a result, there has been a recent push for adaptive traffic signal control methods which can dynamically adjust their control rule based on locally-sensed real-time traffic conditions. Reinforcement learning (RL) is one such adaptive and versatile data-driven method which has shown great promise in a variety of decision-making problems. Combined with deep learning, RL can be leveraged to learn powerful control policies for highly complex tasks. This work focuses on decentralized adaptive TSC and proposes a distributed multi-agent reinforcement learning (MARL) framework, where each agent in the system is a traffic intersection tasked to select the traffic phase of that intersection, based on locally-sensed traffic conditions and communication with its neighbors. 
However, due to the highly connected and interdependent nature of the intersections/agents, cooperation among these intersections is key to achieving the type of bottom-up, network-wide traffic optimization desired. To this end, this work proposes a novel social intrinsic reward mechanism to learn locally-cooperative traffic signal control policies. Counterfactually-predicted states, obtained using a learned dynamics model, are used to compute an intrinsic reward that captures the impact an agent's immediate actions have on its neighbouring agents' future states, thus encouraging locally-selfless behaviors. In contrast to simply sharing rewards among neighbors, which usually increases reward noise, our proposed intrinsic reward allows agents to explicitly assign credit to each other, leading to more stable and faster convergence to enhanced-cooperation policies. We present extensive comparison results against state-of-the-art methods on the Manhattan 5x5 traffic network using the standard traffic simulator, SUMO. There, our results show that our proposed framework exhibits comparable or improved performance over state-of-the-art TSC baselines. Bachelor of Engineering (Mechanical Engineering) 2022-06-09T02:32:35Z 2022-06-09T02:32:35Z 2022 Final Year Project (FYP) Damani, M. (2022). Traffic signal control for optimized urban mobility. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/159038 https://hdl.handle.net/10356/159038 en application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Mechanical engineering
spellingShingle Engineering::Mechanical engineering
Damani, Mehul
Traffic signal control for optimized urban mobility
description The aim of traffic signal control (TSC) is to optimize vehicle traffic in urban road networks, via the control of traffic lights at intersections. Efficient traffic signal control can significantly reduce the detrimental impacts of traffic congestion, such as environmental pollution, passenger frustration and economic losses due to wasted time (e.g., surrounding delivery or emergency vehicles). At present, fixed-time controllers, which use offline data to fix the duration of traffic signal phases, remain the most widespread. However, urban traffic exhibits complex spatio-temporal patterns, such as peak congestion at the end of a workday. Fixed-time controllers, which have a pre-defined control rule, are unable to account for such dynamic patterns and as a result, there has been a recent push for adaptive traffic signal control methods which can dynamically adjust their control rule based on locally-sensed real-time traffic conditions. Reinforcement learning (RL) is one such adaptive and versatile data-driven method which has shown great promise in a variety of decision-making problems. Combined with deep learning, RL can be leveraged to learn powerful control policies for highly complex tasks. This work focuses on decentralized adaptive TSC and proposes a distributed multi-agent reinforcement learning (MARL) framework, where each agent in the system is a traffic intersection tasked to select the traffic phase of that intersection, based on locally-sensed traffic conditions and communication with its neighbors. However, due to the highly connected and interdependent nature of the intersections/agents, cooperation among these intersections is key to achieving the type of bottom-up, network-wide traffic optimization desired. To this end, this work proposes a novel social intrinsic reward mechanism to learn locally-cooperative traffic signal control policies. 
Counterfactually-predicted states, obtained using a learned dynamics model, are used to compute an intrinsic reward that captures the impact an agent's immediate actions have on its neighbouring agents' future states, thus encouraging locally-selfless behaviors. In contrast to simply sharing rewards among neighbors, which usually increases reward noise, our proposed intrinsic reward allows agents to explicitly assign credit to each other, leading to more stable and faster convergence to enhanced-cooperation policies. We present extensive comparison results against state-of-the-art methods on the Manhattan 5x5 traffic network using the standard traffic simulator, SUMO. There, our results show that our proposed framework exhibits comparable or improved performance over state-of-the-art TSC baselines.
author2 Domenico Campolo
author_facet Domenico Campolo
Damani, Mehul
format Final Year Project
author Damani, Mehul
author_sort Damani, Mehul
title Traffic signal control for optimized urban mobility
title_short Traffic signal control for optimized urban mobility
title_full Traffic signal control for optimized urban mobility
title_fullStr Traffic signal control for optimized urban mobility
title_full_unstemmed Traffic signal control for optimized urban mobility
title_sort traffic signal control for optimized urban mobility
publisher Nanyang Technological University
publishDate 2022
url https://hdl.handle.net/10356/159038
_version_ 1759855750646071296