Deep reinforcement learning for dynamic scheduling of a flexible job shop

The ability to handle unpredictable dynamic events is becoming more important in the pursuit of agile and flexible production scheduling. At the same time, the cyber-physical convergence in production systems creates massive amounts of industrial data that need to be mined and analysed in real time. To facilitate such real-time control, this research proposes a hierarchical and distributed architecture to solve the dynamic flexible job shop scheduling problem. The Double Deep Q-Network algorithm is used to train the scheduling agents to capture the relationship between production information and scheduling objectives and to make real-time scheduling decisions for a flexible job shop with constant job arrivals. Specialised state and action representations are proposed to handle the variable specification of the problem in dynamic scheduling. Additionally, a surrogate reward-shaping technique is developed to improve learning efficiency and scheduling effectiveness. A simulation study is carried out to validate the performance of the proposed approach under different scenarios. Numerical results show that not only does the proposed approach deliver superior performance compared with existing scheduling strategies, its advantages also persist even if the manufacturing system configuration changes.
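
The record only summarises the approach, so the sketch below is purely illustrative: it shows the standard Double Deep Q-Network target computation named in the abstract (online network selects the next action, target network evaluates it), not the authors' implementation. All names and sizes (QNet, the 8-feature state vector, the 4 dispatching actions) are hypothetical placeholders.

# Illustrative sketch only: a minimal Double DQN target computation for a
# dispatching agent. Not the paper's code; all names and sizes are assumptions.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps a shop-floor state vector to Q-values over dispatching actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def double_dqn_targets(online, target, rewards, next_states, dones, gamma=0.99):
    """Double DQN: the online net picks the greedy action, the target net scores it."""
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)   # action selection
        next_q = target(next_states).gather(1, next_actions).squeeze(1)  # action evaluation
        return rewards + gamma * (1.0 - dones) * next_q

# Usage with random placeholder data: 8 state features, 4 dispatching rules.
online, tgt = QNet(8, 4), QNet(8, 4)
tgt.load_state_dict(online.state_dict())
batch = torch.randn(32, 8)
y = double_dqn_targets(online, tgt, torch.zeros(32), batch, torch.zeros(32))

Decoupling action selection from action evaluation in this way is what distinguishes Double DQN from vanilla DQN and reduces the overestimation of Q-values; how the paper shapes the state, actions, and surrogate reward around this core is described in the full article.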


Bibliographic Details
Main Authors: Liu, Renke, Piplani, Rajesh, Toro, Carlos
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Industrial engineering; Dynamic Scheduling; Flexible Job Shop
Online Access:https://hdl.handle.net/10356/163903
Institution: Nanyang Technological University
id sg-ntu-dr.10356-163903
record_format dspace
spelling sg-ntu-dr.10356-1639032022-12-21T07:19:35Z Deep reinforcement learning for dynamic scheduling of a flexible job shop Liu, Renke Piplani, Rajesh Toro, Carlos School of Mechanical and Aerospace Engineering Engineering::Industrial engineering Dynamic Scheduling Flexible Job Shop The ability to handle unpredictable dynamic events is becoming more important in the pursuit of agile and flexible production scheduling. At the same time, the cyber-physical convergence in production systems creates massive amounts of industrial data that need to be mined and analysed in real time. To facilitate such real-time control, this research proposes a hierarchical and distributed architecture to solve the dynamic flexible job shop scheduling problem. The Double Deep Q-Network algorithm is used to train the scheduling agents to capture the relationship between production information and scheduling objectives and to make real-time scheduling decisions for a flexible job shop with constant job arrivals. Specialised state and action representations are proposed to handle the variable specification of the problem in dynamic scheduling. Additionally, a surrogate reward-shaping technique is developed to improve learning efficiency and scheduling effectiveness. A simulation study is carried out to validate the performance of the proposed approach under different scenarios. Numerical results show that not only does the proposed approach deliver superior performance compared with existing scheduling strategies, its advantages also persist even if the manufacturing system configuration changes. 2022-12-21T07:19:35Z 2022-12-21T07:19:35Z 2022 Journal Article Liu, R., Piplani, R. & Toro, C. (2022). Deep reinforcement learning for dynamic scheduling of a flexible job shop. International Journal of Production Research, 60(13), 4049-4069. https://dx.doi.org/10.1080/00207543.2022.2058432 0020-7543 https://hdl.handle.net/10356/163903 10.1080/00207543.2022.2058432 2-s2.0-85129220142 13 60 4049 4069 en International Journal of Production Research © 2022 Informa UK Limited, trading as Taylor & Francis Group. All rights reserved.
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Industrial engineering
Dynamic Scheduling
Flexible Job Shop
spellingShingle Engineering::Industrial engineering
Dynamic Scheduling
Flexible Job Shop
Liu, Renke
Piplani, Rajesh
Toro, Carlos
Deep reinforcement learning for dynamic scheduling of a flexible job shop
description The ability to handle unpredictable dynamic events is becoming more important in the pursuit of agile and flexible production scheduling. At the same time, the cyber-physical convergence in production systems creates massive amounts of industrial data that need to be mined and analysed in real time. To facilitate such real-time control, this research proposes a hierarchical and distributed architecture to solve the dynamic flexible job shop scheduling problem. The Double Deep Q-Network algorithm is used to train the scheduling agents to capture the relationship between production information and scheduling objectives and to make real-time scheduling decisions for a flexible job shop with constant job arrivals. Specialised state and action representations are proposed to handle the variable specification of the problem in dynamic scheduling. Additionally, a surrogate reward-shaping technique is developed to improve learning efficiency and scheduling effectiveness. A simulation study is carried out to validate the performance of the proposed approach under different scenarios. Numerical results show that not only does the proposed approach deliver superior performance compared with existing scheduling strategies, its advantages also persist even if the manufacturing system configuration changes.
author2 School of Mechanical and Aerospace Engineering
author_facet School of Mechanical and Aerospace Engineering
Liu, Renke
Piplani, Rajesh
Toro, Carlos
format Article
author Liu, Renke
Piplani, Rajesh
Toro, Carlos
author_sort Liu, Renke
title Deep reinforcement learning for dynamic scheduling of a flexible job shop
title_short Deep reinforcement learning for dynamic scheduling of a flexible job shop
title_full Deep reinforcement learning for dynamic scheduling of a flexible job shop
title_fullStr Deep reinforcement learning for dynamic scheduling of a flexible job shop
title_full_unstemmed Deep reinforcement learning for dynamic scheduling of a flexible job shop
title_sort deep reinforcement learning for dynamic scheduling of a flexible job shop
publishDate 2022
url https://hdl.handle.net/10356/163903
_version_ 1753801109801009152