Deep reinforcement learning for dynamic scheduling of a flexible job shop


Bibliographic Details
Main Authors: Liu, Renke; Piplani, Rajesh; Toro, Carlos
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/163903
Description
Abstract: The ability to handle unpredictable dynamic events is becoming more important in the pursuit of agile and flexible production scheduling. At the same time, cyber-physical convergence in production systems creates massive amounts of industrial data that need to be mined and analysed in real time. To facilitate such real-time control, this research proposes a hierarchical and distributed architecture to solve the dynamic flexible job shop scheduling problem. The Double Deep Q-Network algorithm is used to train the scheduling agents, to capture the relationship between production information and scheduling objectives, and to make real-time scheduling decisions for a flexible job shop with constant job arrivals. Specialised state and action representations are proposed to handle the variable specification of the problem in dynamic scheduling. Additionally, a surrogate reward-shaping technique is developed to improve learning efficiency and scheduling effectiveness. A simulation study is carried out to validate the performance of the proposed approach under different scenarios. Numerical results show that not only does the proposed approach deliver superior performance compared to existing scheduling strategies, its advantages also persist even when the manufacturing system configuration changes.
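
The core learning component named in the abstract, Double Deep Q-Network (DDQN), decouples action *selection* (online network) from action *evaluation* (target network) when forming the bootstrap target, which reduces the overestimation bias of plain DQN. Below is a minimal NumPy sketch of that target computation; the toy linear Q-functions, state/action dimensions, and discount factor are illustrative assumptions, not the paper's actual networks or scheduling state representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: 4-dim state, 3 dispatching actions.
N_STATE, N_ACT, GAMMA = 4, 3, 0.95

# Linear Q-functions stand in for the online and target networks.
W_online = rng.normal(size=(N_STATE, N_ACT))
W_target = W_online.copy()  # target net starts as a copy, updated periodically

def q_values(W, s):
    """Q(s, a) for every action a under weight matrix W."""
    return s @ W

def ddqn_target(r, s_next, done):
    """Double DQN target: the online net selects the greedy action,
    the target net evaluates it (selection/evaluation decoupled)."""
    a_star = int(np.argmax(q_values(W_online, s_next)))
    bootstrap = 0.0 if done else GAMMA * q_values(W_target, s_next)[a_star]
    return r + bootstrap

s_next = rng.normal(size=N_STATE)
y = ddqn_target(r=1.0, s_next=s_next, done=False)      # bootstrapped target
y_terminal = ddqn_target(r=1.0, s_next=s_next, done=True)  # terminal: reward only
```

In a scheduling context, `s_next` would encode shop-floor status (e.g. queue contents, machine utilisation) and the actions would correspond to dispatching decisions; the temporal-difference loss between `y` and `Q_online(s, a)` then drives the gradient update.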