A simulation-based reinforcement learning solution for a dynamic mixed-model assembly line sequencing problem

Bibliographic Details
Main Author: Yu, Dongsheng
Other Authors: Chen, Songlin
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/166638
Institution: Nanyang Technological University
Description
Summary: The Assembly-to-order production strategy is widely used to fulfill the growing demand for customization while balancing production costs, particularly in the electric vehicle industry. To implement Assembly-to-order, a corresponding production arrangement known as the Mixed-Model Assembly Line is used. How to achieve dynamic sequencing that reduces changeover time and enhances throughput under stochastic demand and waiting-time thresholds requires further investigation. This dissertation addresses the challenge with simulation-based reinforcement learning for dynamic sequencing, which achieved higher throughput than the benchmark policies of First-In-First-Out, Fixed Batch Size, and Arrival-Frequency-Based Batch Size. Moreover, the simulation, built on actual Mixed-Model Assembly Line layouts, provides an interactive environment in which a reinforcement learning agent can learn near-optimal policies without affecting actual production. The learned policies can then be applied to real-time sequencing to improve the performance of the actual assembly line.
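
Note: The full text of the dissertation is not included in this record, so the following is only a minimal, hypothetical sketch of the kind of simulation environment the summary describes: a Gym-style reset/step loop with two product models, model-dependent changeover times, stochastic order arrivals, and a waiting-time threshold. All model names, times, and probabilities below are invented placeholders rather than values from the thesis, and the random action in the usage stub merely stands in for a learned reinforcement learning policy.

import random
from collections import deque

# Hypothetical parameters: two product models, model-dependent changeover times,
# stochastic order arrivals, and a waiting-time threshold per order.
MODELS = ("A", "B")
CHANGEOVER = {("A", "B"): 3, ("B", "A"): 3, ("A", "A"): 0, ("B", "B"): 0}
PROCESS_TIME = {"A": 5, "B": 6}
ARRIVAL_PROB = 0.6          # chance a new order arrives after each decision
WAIT_THRESHOLD = 40         # an order waiting longer than this pre-empts the choice


class SequencingEnv:
    """Minimal discrete-event stand-in for a mixed-model assembly line.

    State: waiting-order counts per model, age of the oldest order per model,
    and the model currently on the line. Action: which model to launch next.
    Reward: completed units minus a changeover penalty (a throughput proxy).
    """

    def __init__(self, horizon=200, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        self.queues = {m: deque() for m in MODELS}   # arrival times of waiting orders
        self.current = "A"
        self.clock = 0
        return self._obs()

    def _obs(self):
        counts = [len(q) for q in self.queues.values()]
        oldest = [self.clock - q[0] if q else 0 for q in self.queues.values()]
        return tuple(counts + oldest + [MODELS.index(self.current)])

    def step(self, action):
        model = MODELS[action]
        # Enforce the waiting threshold: an overdue order overrides the agent's choice.
        for m in MODELS:
            if self.queues[m] and self.clock - self.queues[m][0] > WAIT_THRESHOLD:
                model = m
                break
        reward = 0.0
        if self.queues[model]:
            self.queues[model].popleft()
            changeover = CHANGEOVER[(self.current, model)]
            self.clock += changeover + PROCESS_TIME[model]
            reward = 1.0 - 0.1 * changeover      # one unit completed, minus changeover cost
            self.current = model
        else:
            self.clock += 1                       # idle step: nothing of that model to launch
        # Stochastic demand: a new order may arrive for a random model.
        if self.rng.random() < ARRIVAL_PROB:
            self.queues[self.rng.choice(MODELS)].append(self.clock)
        done = self.clock >= self.horizon
        return self._obs(), reward, done


if __name__ == "__main__":
    env = SequencingEnv()
    obs = env.reset()
    done, total = False, 0.0
    while not done:
        action = random.randrange(len(MODELS))    # placeholder for a learned policy
        obs, r, done = env.step(action)
        total += r
    print("episode return:", round(total, 2))

In this kind of setup, a reinforcement learning agent would interact only with the simulated line, and the resulting policy could then be evaluated against FIFO or fixed-batch sequencing rules before any deployment on the physical assembly line.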