Deep reinforcement learning guided improvement heuristic for job shop scheduling

Recent studies applying deep reinforcement learning (DRL) to the job-shop scheduling problem (JSSP) have focused on construction heuristics. However, their performance is still far from optimal, mainly because the underlying graph representation scheme is unsuitable for modelling partial solutions at each construction step.


Bibliographic Details
Main Authors: ZHANG, Cong, CAO, Zhiguang, SONG, Wen, WU, Yaoxin, ZHANG, Jie
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9329
https://ink.library.smu.edu.sg/context/sis_research/article/10329/viewcontent/1334_Deep_Reinforcement_Learni.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10329
record_format dspace
last_indexed 2024-09-26T07:42:08Z
published 2024-05-01T07:00:00Z
format text (application/pdf)
license http://creativecommons.org/licenses/by-nc-nd/4.0/
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Deep Reinforcement Learning
Graph Neural Network
Job Shop Scheduling
Combinatorial Optimization
Graphics and Human Computer Interfaces
OS and Networks
description Recent studies applying deep reinforcement learning (DRL) to the job-shop scheduling problem (JSSP) have focused on construction heuristics. However, their performance is still far from optimal, mainly because the underlying graph representation scheme is unsuitable for modelling partial solutions at each construction step. This paper proposes a novel DRL-guided improvement heuristic for solving JSSP, in which a graph representation is employed to encode complete solutions. We design a graph-neural-network-based representation scheme, consisting of two modules that effectively capture the dynamic topology and the different node types of the graphs encountered during the improvement process. To speed up solution evaluation during improvement, we present a novel message-passing mechanism that can evaluate multiple solutions simultaneously. We prove that the computational complexity of our method scales linearly with problem size. Experiments on classic benchmarks show that the improvement policy learned by our method outperforms state-of-the-art DRL-based methods by a large margin.
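The evaluation idea the abstract describes (scoring a complete schedule by propagating completion times along the disjunctive graph, so that many candidate solutions can be relaxed in the same loop) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name `makespan` and the data layout are assumptions made for the sketch.

```python
# Hypothetical sketch: evaluating a JSSP solution by fixed-point message
# passing over its disjunctive graph. Each operation's completion time is
# its duration plus the max of its job predecessor's and machine
# predecessor's completion times; iterating this update to a fixed point
# yields longest-path values, and the makespan is the largest of them.

def makespan(durations, machine_order):
    """durations[j][k]: processing time of operation k of job j.
    machine_order[m]: sequence of (job, op) pairs processed on machine m."""
    n_jobs = len(durations)
    comp = {(j, k): 0.0 for j in range(n_jobs) for k in range(len(durations[j]))}
    changed = True
    while changed:  # synchronous message passing until convergence
        changed = False
        for j in range(n_jobs):
            for k in range(len(durations[j])):
                job_pred = comp[(j, k - 1)] if k > 0 else 0.0
                mach_pred = 0.0
                for seq in machine_order:
                    if (j, k) in seq:
                        i = seq.index((j, k))
                        if i > 0:
                            mach_pred = comp[seq[i - 1]]
                new = durations[j][k] + max(job_pred, mach_pred)
                if new != comp[(j, k)]:
                    comp[(j, k)] = new
                    changed = True
    return max(comp.values())
```

For a feasible schedule the disjunctive graph is acyclic, so the updates converge; because each solution's relaxation is independent, a batch of schedules can be evaluated with the same update rule, which is the property the paper's batched message-passing mechanism exploits.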
format text
author ZHANG, Cong
CAO, Zhiguang
SONG, Wen
WU, Yaoxin
ZHANG, Jie
author_sort ZHANG, Cong
title Deep reinforcement learning guided improvement heuristic for job shop scheduling
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9329
https://ink.library.smu.edu.sg/context/sis_research/article/10329/viewcontent/1334_Deep_Reinforcement_Learni.pdf