Dynamic job shop scheduling using deep reinforcement learning


Bibliographic Details
Main Author: Tan, Hong Ming
Other Authors: Shu Jian Jun
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/177529
Institution: Nanyang Technological University
Description
Summary: This FYP project aims to improve the makespan in dynamic job shop scheduling using deep reinforcement learning, testing different neural network configurations and comparing the results against heuristic methods. The deep reinforcement learning algorithm is Rainbow Deep Q-Learning without multi-step learning and without distributional Q-learning (RDQN), tested with combinations of 1D convolutional (CNN1D), LSTM, and dense layers. RDQN with CNN1D is found to give the best makespan when trained on a job shop that closely represents a real-life process flow and then tested on job shops whose processing times vary from the training instance; varying numbers of jobs and machines are also tested. The results are compared with heuristic methods as well as with alternative neural network configurations.
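The heuristic baselines mentioned in the summary are typically dispatching rules. As an illustration only (not the thesis's actual code), a minimal job-shop simulator that scores makespan under an assumed shortest-processing-time (SPT) dispatching rule might look like:

```python
def spt_makespan(jobs):
    """Makespan of a job shop scheduled by the SPT dispatching rule.

    jobs: list of jobs; each job is an ordered list of (machine, proc_time)
    operations. This is a hypothetical toy baseline, not the thesis code.
    """
    next_op = [0] * len(jobs)        # index of each job's next operation
    job_ready = [0.0] * len(jobs)    # earliest time each job's next op can start
    machine_ready = {}               # earliest time each machine is free
    remaining = sum(len(job) for job in jobs)
    while remaining:
        # Gather every job's next pending operation with its feasible start time.
        candidates = []
        for j, job in enumerate(jobs):
            if next_op[j] < len(job):
                machine, proc = job[next_op[j]]
                start = max(job_ready[j], machine_ready.get(machine, 0.0))
                candidates.append((proc, start, j, machine))
        # SPT: dispatch the pending operation with the shortest processing time.
        proc, start, j, machine = min(candidates)
        finish = start + proc
        job_ready[j] = finish
        machine_ready[machine] = finish
        next_op[j] += 1
        remaining -= 1
    return max(job_ready)  # latest job completion = makespan
```

A learned policy would replace the `min(candidates)` selection with an action chosen by the Q-network, which is what allows an RL agent to beat such rules on makespan.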