Simulation of bin picking problem based on deep reinforcement learning

Bibliographic Details
Main Author: Sun, Chaoyu
Other Authors: Wen Bihan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167786
Description
Abstract: The application of deep reinforcement learning (DRL) has become prevalent in many fields and has proven effective at solving numerous problems in the robotics industry. This thesis proposes a simulation framework on the CoppeliaSim platform that applies DRL algorithms to bin picking tasks. Our approach trains two fully convolutional networks that map visual observations to actions: one network evaluates the effectiveness of pushing across different end-effector orientations and locations through dense pixel-level sampling, while the other does the same for grasping. Both networks are jointly trained within the Q-learning framework and are fully self-supervised through trial and error, with successful grasps serving as the reward signal. For the simulation experiment, we used a video file generated by the simulation platform showing a robot arm picking up an object; by applying the DRL algorithm, the robot arm learned through practice to grasp the object autonomously. The simulation results demonstrate that our system rapidly acquires complex behaviors, even in challenging cluttered scenes, and outperforms the baseline in grasping success rate and picking efficiency.
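As a concrete illustration of the pixel-wise, two-network Q-learning formulation described in the abstract, the following is a minimal hypothetical sketch in PyTorch. It is not the thesis code: the network sizes, variable names, discount factor, and input format (an RGB-D heightmap) are assumptions made only for this example.

import torch
import torch.nn as nn

class PixelQNetwork(nn.Module):
    # Small fully convolutional network: maps an RGB-D heightmap to a dense,
    # pixel-wise map of Q-values for one motion primitive (push or grasp).
    def __init__(self, in_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one Q-value per pixel
        )

    def forward(self, heightmap):
        return self.net(heightmap)  # shape (B, 1, H, W)

def select_action(push_net, grasp_net, heightmap):
    # Greedy policy: evaluate both primitives at every pixel and execute the
    # primitive/pixel pair with the highest predicted Q-value.
    with torch.no_grad():
        q_push = push_net(heightmap)    # (1, 1, H, W)
        q_grasp = grasp_net(heightmap)  # (1, 1, H, W)
    q_all = torch.cat([q_push, q_grasp], dim=1)  # (1, 2, H, W)
    h, w = q_all.shape[-2:]
    flat = int(torch.argmax(q_all))
    primitive, pixel = flat // (h * w), flat % (h * w)
    y, x = pixel // w, pixel % w
    return ("push" if primitive == 0 else "grasp"), y, x

def q_target(reward, next_q_push, next_q_grasp, gamma=0.5, done=False):
    # One-step Q-learning target r + gamma * max Q(s', a'), where the max runs
    # over both primitives and all pixels; a successful grasp yields the reward.
    if done:
        return float(reward)
    best_next = torch.max(torch.cat([next_q_push, next_q_grasp], dim=1))
    return float(reward) + gamma * float(best_next)

One common way to cover the "different end-effector directions" mentioned in the abstract, which this sketch omits for brevity, is to evaluate both networks on several rotated copies of the input heightmap and treat each rotation as a distinct end-effector orientation.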