Simulation of bin picking problem based on deep reinforcement learning


Bibliographic Details
Main Author: Sun, Chaoyu
Other Authors: Wen Bihan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167786
Description
Summary: The application of deep reinforcement learning (DRL) has become prevalent in many fields and has proven effective for numerous problems in the robotics industry. This article proposes a simulation framework on the CoppeliaSim platform that implements DRL algorithms to tackle bin picking tasks. Our approach involves training two fully convolutional networks that map visual observations to actions. One network evaluates the effectiveness of pushing across different end-effector directions and locations with dense pixel-level sampling, while the other network does the same for the grasping action. Both networks are jointly trained within the Q-learning framework and are fully self-supervised through trial and error, with successful grasps serving as rewards for the training process. To carry out the simulation experiment, we used a video file generated by the simulation platform showing a robot arm picking up an object. By applying the DRL algorithm, the robot arm learned to perform the grasping task autonomously through practice. The simulation results demonstrate that our system can rapidly acquire complex behaviors, even in challenging cluttered scenes, and outperforms the baseline in grasping success rate and picking efficiency.
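The summary describes a setup in which two fully convolutional networks produce dense, pixel-wise Q-values for pushing and grasping, jointly trained with Q-learning and rewarded by successful grasps. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the thesis code: the network architecture, layer sizes, discount factor, reward values, and function names are illustrative assumptions, and the rotation of the observation used to sample different end-effector directions is omitted for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelWiseQNet(nn.Module):
    """Fully convolutional network: visual observation -> one Q-value per pixel."""

    def __init__(self, in_channels: int = 4):  # e.g. RGB + depth heightmap (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # dense Q-value map, same spatial size as the input
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# One network per motion primitive: pushing and grasping.
push_net, grasp_net = PixelWiseQNet(), PixelWiseQNet()
optimizer = torch.optim.Adam(
    list(push_net.parameters()) + list(grasp_net.parameters()), lr=1e-4
)
gamma = 0.5  # discount factor (assumed value)


def q_learning_step(obs, next_obs, action_pixel, primitive, grasp_succeeded):
    """One self-supervised update; obs/next_obs are (1, C, H, W) tensors.

    The reward is 1 only when the executed primitive was a grasp and it succeeded.
    """
    y, x = action_pixel
    reward = 1.0 if (primitive == "grasp" and grasp_succeeded) else 0.0

    # Bootstrapped target: best pixel-wise Q-value over both primitives at the next state.
    with torch.no_grad():
        next_q = torch.max(
            torch.stack([push_net(next_obs).max(), grasp_net(next_obs).max()])
        )
        target = reward + gamma * next_q

    # Only the Q-value at the executed pixel is trained (dense sampling, sparse update).
    net = grasp_net if primitive == "grasp" else push_net
    q_executed = net(obs)[0, 0, y, x]
    loss = F.smooth_l1_loss(q_executed, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example with dummy data (1-sample batch, 4-channel 64x64 observation).
obs = torch.rand(1, 4, 64, 64)
next_obs = torch.rand(1, 4, 64, 64)
loss = q_learning_step(obs, next_obs, action_pixel=(32, 17),
                       primitive="grasp", grasp_succeeded=True)
```

At execution time, such a system would evaluate both Q-maps for the current observation and run the primitive and pixel location with the highest predicted value, so pushing is learned only insofar as it makes later grasps succeed.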