DCL-AIM: decentralized coordination learning of autonomous intersection management for connected and automated vehicles

Bibliographic Details
Main Authors: Wu, Yuanyuan, Chen, Haipeng, Zhu, Feng
Other Authors: School of Civil and Environmental Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access:https://hdl.handle.net/10356/143864
Institution: Nanyang Technological University
Description
Summary: Conventional intersection management strategies, such as signalized intersections, are not necessarily optimal in a connected and automated vehicle (CAV) environment. Autonomous intersection management (AIM) is tailored to CAVs and aims to replace conventional traffic control strategies. In this work, using the communication and computation technologies of CAVs, the sequential movements of vehicles through intersections are modelled as multi-agent Markov decision processes (MAMDPs) in which vehicle agents cooperate to minimize intersection delay under collision-free constraints. To handle the high dimensionality inherent in multi-agent decision-making problems, the state space of the CAVs is decomposed into an independent part and a coordinated part by exploiting the structural properties of the AIM problem, and a decentralized coordination multi-agent learning approach (DCL-AIM) is proposed to solve the problem efficiently by exploiting both global and localized agent coordination needs in AIM. The main feature of the proposed approach is to explicitly identify and dynamically adapt agent coordination needs during the learning process, so that the curse of dimensionality and the environment non-stationarity problems in multi-agent learning are alleviated. The effectiveness of the proposed method is demonstrated under a variety of traffic conditions. DCL-AIM is compared against First-Come-First-Serve based AIM (FCFS-AIM), with a Longest-Queue-First policy (LQF-AIM) and signal control based on Webster's method (Signal) as additional benchmarks. Experimental results show that the sequential decisions from DCL-AIM outperform the other control policies.
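
The following is a minimal, hypothetical Python sketch of the coordination-aware learning idea summarized above: each vehicle agent learns over its own (independent) state and switches to a joint (coordinated) representation only while another agent contends for the same conflict cell inside the intersection. The state encoding, conflict test, reward values, and class/method names are illustrative assumptions for this sketch, not the paper's implementation.

import random
from collections import defaultdict

ACTIONS = ("go", "yield")

class CAVAgent:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q_indep = defaultdict(float)   # Q-values over the agent's own state only
        self.q_coord = defaultdict(float)   # Q-values over (own state, conflicting neighbour state)

    @staticmethod
    def in_conflict(own, other):
        # Coordination is needed only while both vehicles target the same conflict cell
        # (assumed conflict test for illustration).
        return other is not None and own["next_cell"] == other["next_cell"]

    def _table_and_key(self, own, other):
        own_key = tuple(sorted(own.items()))
        if self.in_conflict(own, other):
            return self.q_coord, (own_key, tuple(sorted(other.items())))
        return self.q_indep, own_key

    def act(self, own, other):
        table, s = self._table_and_key(own, other)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                      # exploration
        return max(ACTIONS, key=lambda a: table[(s, a)])       # greedy action

    def update(self, own, other, action, reward, next_own, next_other):
        table, s = self._table_and_key(own, other)
        next_table, s_next = self._table_and_key(next_own, next_other)
        best_next = max(next_table[(s_next, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - table[(s, action)]
        table[(s, action)] += self.alpha * td_error

if __name__ == "__main__":
    agent = CAVAgent()
    own = {"lane": 0, "dist": 2, "next_cell": 5}
    other = {"lane": 1, "dist": 3, "next_cell": 5}        # neighbour heading for the same cell
    a = agent.act(own, other)
    reward = -1.0 if a == "yield" else -0.1               # assumed delay penalty
    next_own = {"lane": 0, "dist": 1, "next_cell": 5}
    agent.update(own, other, a, reward, next_own, None)   # neighbour cleared: back to independent Q

In this sketch, the switch between q_indep and q_coord plays the role of dynamically identifying coordination needs: agents learn independently by default and pay the cost of a joint state only when a potential conflict is detected, which is how the decomposition described in the abstract helps contain the dimensionality of the joint problem.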