ML-MMAS: self-learning ant colony optimization for multi-criteria journey planning

Ant Colony Optimization (ACO) algorithms have been widely employed for solving optimization problems. Their ability to find optimal solutions depends heavily on the parameterization of the pheromone trails. However, the pheromone parameterization mechanisms in existing ACO algorithms have two major shortcomings: 1) pheromone trails are instance-specific, and hence must be generated from scratch for each new problem instance; 2) solutions are constructed based on static pheromone trails, which ignores the impact of the evolving decisions on the final solution. In this paper, we study the personalized journey route planning problem on multimodal public transport networks (MMPTN) that considers multiple travel criteria. The problem is addressed with a weighted sum method, which provides the journey route that best matches a passenger's preference vector over the travel criteria. We propose a Machine Learning (ML) based Max–Min Ant System, called ML-MMAS, which incorporates ML techniques into ACO. ML-MMAS learns a pheromone function that directly produces prominent pheromone trails for constructing solutions to any new instance, without the need to initialize and update pheromone trails from scratch. We propose a self-learning framework that trains ML-MMAS using incremental solutions generated by MMAS, thereby avoiding the need for pre-computed optimal solutions. Specifically, we develop a deep learning-based pheromone prediction model and design several groups of features that characterize the evolving states of the search space during solution construction. Finally, we propose a solution component embedding (SCE) model that learns representations of solution components (transit services), taking into account the transferability among transit services and passenger transfer preferences. The SCE model enables the extraction of high-quality features for the solution components and can also be applied directly to other optimization problems whose solutions can be modeled as sequences of solution components. We evaluate ML-MMAS against an exact algorithm and the underlying MMAS, using the MMPTN and passenger demands of Singapore. Results show that ML-MMAS is significantly faster than both the exact algorithm and the original MMAS, while achieving near-optimal solutions.

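To make the construction step concrete, the sketch below illustrates the general idea described in the abstract: an MMAS-style random-proportional choice in which the pheromone value of each candidate transit leg is predicted by a learned function rather than read from an instance-specific pheromone matrix, and the multiple travel criteria are scalarized by a weighted sum over the passenger's preference vector. This is a minimal illustration under assumed names and values (PREFERENCE, predict_pheromone, choose_next, the toy criteria and Max-Min bounds), not the authors' implementation.

# Minimal sketch (not the authors' code): MMAS-style solution construction where the
# pheromone of each candidate component is *predicted* by a learned function instead of
# being read from an instance-specific pheromone matrix. All names and numbers are illustrative.
import random

# One candidate solution component: (next_stop, travel_time_min, num_transfers, walk_km).
PREFERENCE = (0.6, 0.3, 0.1)  # assumed passenger preference weights over the three criteria

def weighted_cost(edge, preference=PREFERENCE):
    """Weighted-sum scalarization of the multi-criteria cost of one transit leg."""
    _, travel_time, transfers, walk = edge
    return sum(w * c for w, c in zip(preference, (travel_time, transfers, walk)))

def predict_pheromone(partial_route, edge):
    """Stand-in for the learned pheromone model.
    ML-MMAS would feed features of the evolving partial solution and of the candidate
    component into a trained deep model; here we simply return a value that mildly
    prefers cheap edges so the sketch stays self-contained and runnable.
    """
    return 1.0 / (1.0 + weighted_cost(edge))

def choose_next(partial_route, candidates, alpha=1.0, beta=2.0):
    """MMAS random-proportional rule with Max-Min bounds applied to the predicted pheromone."""
    tau_min, tau_max = 0.01, 5.0
    scores = []
    for edge in candidates:
        tau = min(max(predict_pheromone(partial_route, edge), tau_min), tau_max)
        eta = 1.0 / (1e-9 + weighted_cost(edge))  # heuristic desirability of the edge
        scores.append((tau ** alpha) * (eta ** beta))
    total = sum(scores)
    return random.choices(candidates, weights=[s / total for s in scores])[0]

if __name__ == "__main__":
    # Toy choice among three outgoing transit legs from the current stop.
    legs = [("stop_B", 12.0, 1, 0.2), ("stop_C", 9.0, 2, 0.6), ("stop_D", 15.0, 0, 0.1)]
    print(choose_next(partial_route=["stop_A"], candidates=legs))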

Bibliographic Details
Main Authors: He, Peilan; Jiang, Guiyuan; Lam, Siew-Kei; Sun, Yidan
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Pheromone Model; Self-Learning
Online Access: https://hdl.handle.net/10356/163885
Institution: Nanyang Technological University
Citation: He, P., Jiang, G., Lam, S. & Sun, Y. (2022). ML-MMAS: self-learning ant colony optimization for multi-criteria journey planning. Information Sciences, 609, 1052-1074. https://dx.doi.org/10.1016/j.ins.2022.07.150
ISSN: 0020-0255
DOI: 10.1016/j.ins.2022.07.150
Scopus ID: 2-s2.0-85135376186
Funding: National Research Foundation (NRF). This research project is supported in part by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme with the Technical University of Munich at TUMCREATE.
Rights: © 2022 Elsevier Inc. All rights reserved.