Learning to solve 3-D bin packing problem via deep reinforcement learning and constraint programming

Bibliographic Details
Main Authors: JIANG, Yuan, CAO, Zhiguang, ZHANG, Jie
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/8152
Institution: Singapore Management University
Description
Summary: Recently, there has been growing attention on applying deep reinforcement learning (DRL) to solve the 3-D bin packing problem (3-D BPP). However, owing to the relatively uninformative yet computationally heavy encoder and the considerably large action space inherent to the 3-D BPP, existing DRL methods can only handle up to 50 boxes. In this article, we propose to alleviate this issue with a DRL agent that sequentially addresses three subtasks: sequence, orientation, and position. Specifically, we exploit a multimodal encoder, in which a sparse attention subencoder embeds the box state to reduce computation while learning the packing policy, and a convolutional neural network subencoder embeds the view state to produce an auxiliary spatial representation. We also leverage action representation learning in the decoder to cope with the large action space of the position subtask. In addition, we integrate the proposed DRL agent into constraint programming (CP) to further improve solution quality iteratively by exploiting the powerful search framework of CP. Experiments show that both the sole DRL method and the hybrid method enable the agent to solve large-scale instances of 120 boxes or more, and both deliver superior performance against the baselines on instances of various scales.
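
The abstract only sketches the architecture, so the fragment below is a minimal, hypothetical PyTorch illustration of a multimodal encoder in that spirit: a self-attention sub-encoder over per-box features (a dense stand-in for the sparse attention described above) and a CNN sub-encoder over a view of the bin. The class name, layer sizes, feature dimensions, and the height-map view representation are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Illustrative sketch (assumed design): embeds the box state with
    self-attention and the bin's view state with a small CNN."""

    def __init__(self, box_feat_dim=6, embed_dim=128, view_channels=1):
        super().__init__()
        # Box-state sub-encoder: per-box feature projection + self-attention.
        # (The paper uses a sparse attention variant; dense attention here.)
        self.box_proj = nn.Linear(box_feat_dim, embed_dim)
        self.box_attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        # View-state sub-encoder: CNN over a coarse height map of the bin.
        self.view_cnn = nn.Sequential(
            nn.Conv2d(view_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, box_state, view_state):
        # box_state: (batch, num_boxes, box_feat_dim), e.g. box dimensions + flags
        # view_state: (batch, channels, H, W), e.g. a height map of the bin
        h = self.box_proj(box_state)
        box_emb, _ = self.box_attn(h, h, h)   # (batch, num_boxes, embed_dim)
        view_emb = self.view_cnn(view_state)  # (batch, embed_dim)
        return box_emb, view_emb

if __name__ == "__main__":
    enc = MultimodalEncoder()
    boxes = torch.rand(2, 120, 6)    # 120 boxes per instance (illustrative scale)
    view = torch.rand(2, 1, 10, 10)  # coarse height map of the bin
    box_emb, view_emb = enc(boxes, view)
    print(box_emb.shape, view_emb.shape)

In the paper's pipeline, such embeddings would feed a decoder that selects the next box, its orientation, and its position; the decoder and the CP integration are not sketched here.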