Online control of adaptive large neighborhood search using deep reinforcement learning

The Adaptive Large Neighborhood Search (ALNS) algorithm has shown considerable success in solving combinatorial optimization problems (COPs). Nonetheless, the performance of ALNS relies on the proper configuration of its selection and acceptance parameters, which is known to be a complex and resource-intensive task. To address this, we introduce a Deep Reinforcement Learning (DRL) based approach called DR-ALNS that selects operators, adjusts parameters, and controls the acceptance criterion throughout the search. The proposed method aims to learn, based on the state of the search, to configure ALNS for the next iteration to yield more effective solutions for the given optimization problem. We evaluate the proposed method on an orienteering problem with stochastic weights and time windows, as presented in an IJCAI competition. The results show that our approach outperforms vanilla ALNS, ALNS tuned with Bayesian optimization, and two state-of-the-art DRL approaches that were the winning methods of the competition, achieving this with significantly fewer training observations. Furthermore, we demonstrate several good properties of the proposed DR-ALNS method: it is easily adapted to solve different routing problems, its learned policies perform consistently well across various instance sizes, and these policies can be directly applied to different problem variants.
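The abstract describes DR-ALNS only at a high level. As a rough, hypothetical illustration of the idea, the minimal Python sketch below shows how a policy could configure each ALNS iteration (destroy/repair operator choice, destroy severity, acceptance temperature) from simple search-state features. All names, features, and the random stand-in policy are assumptions for illustration only, not the authors' actual implementation; in the paper this configuration step is performed by a policy trained with deep reinforcement learning.

```python
# A minimal sketch, NOT the authors' implementation: a policy configures each
# ALNS iteration (operators, destroy severity, acceptance temperature) from
# search-state features, as the abstract describes at a high level.
import math
import random
from typing import Callable, List, Sequence, Tuple

Solution = List[int]  # e.g. a route encoded as a sequence of customer ids
Operator = Callable[[Solution, float], Solution]  # (solution, severity) -> new solution

def search_state(best_cost: float, cur_cost: float, stall: int,
                 iteration: int, budget: int) -> Tuple[float, float, float]:
    """Hypothetical search-state features observed by the policy."""
    gap = (cur_cost - best_cost) / (abs(best_cost) + 1e-9)
    return (gap, stall / budget, iteration / budget)

def random_policy(state: Tuple[float, ...], n_destroy: int, n_repair: int):
    """Stand-in for a trained DRL policy: returns one ALNS configuration."""
    return (random.randrange(n_destroy),   # destroy operator index
            random.randrange(n_repair),    # repair operator index
            random.uniform(0.1, 0.4),      # destroy severity (fraction removed)
            random.uniform(0.01, 1.0))     # acceptance temperature

def dr_alns_sketch(initial: Solution, cost: Callable[[Solution], float],
                   destroy_ops: Sequence[Operator], repair_ops: Sequence[Operator],
                   policy=random_policy, budget: int = 1000) -> Solution:
    best, current = list(initial), list(initial)
    best_cost = cur_cost = cost(initial)
    stall = 0
    for it in range(budget):
        # The policy sets this iteration's configuration from the search state.
        d, r, severity, temperature = policy(
            search_state(best_cost, cur_cost, stall, it, budget),
            len(destroy_ops), len(repair_ops))
        candidate = repair_ops[r](destroy_ops[d](list(current), severity), severity)
        cand_cost = cost(candidate)
        # Simulated-annealing-style acceptance with the policy-chosen temperature.
        if cand_cost < cur_cost or random.random() < math.exp(
                (cur_cost - cand_cost) / max(temperature, 1e-9)):
            current, cur_cost = candidate, cand_cost
        if cand_cost < best_cost:
            best, best_cost, stall = list(candidate), cand_cost, 0
        else:
            stall += 1
    return best
```

In this sketch the random policy merely marks where the learned component would plug in; the paper's contribution is training that policy so it outperforms hand-tuned and Bayesian-optimized ALNS configurations.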

Bibliographic Details
Main Authors: REIJNEN, Robbert; ZHANG, Yingqian; LAU, Hoong Chuin; BUKHSH, Zaharah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Deep reinforcement learning; Adaptive large neighborhood search; Algorithm configuration; Artificial Intelligence and Robotics; Computer Sciences
Online Access:https://ink.library.smu.edu.sg/sis_research/9893
https://ink.library.smu.edu.sg/context/sis_research/article/10893/viewcontent/2211.00759v3.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-10893
record_format dspace
spelling sg-smu-ink.sis_research-10893 2025-01-02T09:05:37Z
Online control of adaptive large neighborhood search using deep reinforcement learning
REIJNEN, Robbert; ZHANG, Yingqian; LAU, Hoong Chuin; BUKHSH, Zaharah
2024-05-01T07:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/9893
info:doi/10.1609/icaps.v34i1.31507
https://ink.library.smu.edu.sg/context/sis_research/article/10893/viewcontent/2211.00759v3.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng Institutional Knowledge at Singapore Management University
Deep reinforcement learning; Adaptive large neighborhood search; Algorithm configuration; Artificial Intelligence and Robotics; Computer Sciences
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Deep reinforcement learning
Adaptive large neighborhood search
Algorithm configuration
Artificial Intelligence and Robotics
Computer Sciences
description The Adaptive Large Neighborhood Search (ALNS) algorithm has shown considerable success in solving combinatorial optimization problems (COPs). Nonetheless, the performance of ALNS relies on the proper configuration of its selection and acceptance parameters, which is known to be a complex and resource-intensive task. To address this, we introduce a Deep Reinforcement Learning (DRL) based approach called DR-ALNS that selects operators, adjusts parameters, and controls the acceptance criterion throughout the search. The proposed method aims to learn, based on the state of the search, to configure ALNS for the next iteration to yield more effective solutions for the given optimization problem. We evaluate the proposed method on an orienteering problem with stochastic weights and time windows, as presented in an IJCAI competition. The results show that our approach outperforms vanilla ALNS, ALNS tuned with Bayesian optimization, and two state-of-the-art DRL approaches that were the winning methods of the competition, achieving this with significantly fewer training observations. Furthermore, we demonstrate several good properties of the proposed DR-ALNS method: it is easily adapted to solve different routing problems, its learned policies perform consistently well across various instance sizes, and these policies can be directly applied to different problem variants.
format text
author REIJNEN, Robbert
ZHANG, Yingqian
LAU, Hoong Chuin
BUKHSH, Zaharah
title Online control of adaptive large neighborhood search using deep reinforcement learning
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9893
https://ink.library.smu.edu.sg/context/sis_research/article/10893/viewcontent/2211.00759v3.pdf
_version_ 1821237276898754560