Simulation of bin picking problem based on deep reinforcement learning
The application of deep reinforcement learning (DRL) has become prevalent in many fields and has proven effective in solving numerous problems in the robotics industry. This article proposes a simulation framework on the CoppeliaSim platform that implements DRL algorithms to tackle bin picking tasks.
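The abstract describes two fully convolutional networks that map a visual observation to dense pixel-wise action values for pushing and grasping, trained jointly under Q-learning with successful grasps as the reward. The record itself contains no code, so the snippet below is only a minimal illustrative sketch of that idea in PyTorch; the class `PixelQNet`, the function `select_action`, the layer sizes, and the 4-channel RGB-D input are assumptions for illustration, not the author's implementation.

```python
# Minimal sketch (assumed, not the thesis code): two fully convolutional networks
# map an observation (e.g. a 4-channel RGB-D heightmap) to dense pixel-wise Q-maps,
# one for the pushing primitive and one for the grasping primitive.
import torch
import torch.nn as nn

class PixelQNet(nn.Module):
    """Fully convolutional net: (B, C, H, W) observation -> (B, 1, H, W) Q-map."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # one Q-value per pixel
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

push_net, grasp_net = PixelQNet(), PixelQNet()

def select_action(obs: torch.Tensor):
    """Greedy action: choose the primitive (push or grasp) and the pixel with the highest Q."""
    with torch.no_grad():
        q_push = push_net(obs)    # (1, 1, H, W)
        q_grasp = grasp_net(obs)  # (1, 1, H, W)
    q_all = torch.stack([q_push, q_grasp])           # (2, 1, 1, H, W)
    flat_idx = torch.argmax(q_all).item()
    primitive = "push" if flat_idx < q_push.numel() else "grasp"
    h, w = obs.shape[-2:]
    pixel = divmod(flat_idx % (h * w), w)            # (row, col) of the best pixel
    return primitive, pixel
```

Keeping one Q-value per pixel lets a single greedy argmax pick both the primitive and its image location, which corresponds to the dense pixel-level sampling mentioned in the abstract.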
Saved in:
Main Author: Sun, Chaoyu
Other Authors: Wen Bihan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access: https://hdl.handle.net/10356/167786
Institution: Nanyang Technological University
Language: English
id: sg-ntu-dr.10356-167786
record_format: dspace
spelling: sg-ntu-dr.10356-167786 2023-07-04T16:23:50Z Simulation of bin picking problem based on deep reinforcement learning. Sun, Chaoyu; Wen Bihan; School of Electrical and Electronic Engineering; bihan.wen@ntu.edu.sg; Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics. The application of deep reinforcement learning (DRL) has become prevalent in many fields and has proven effective in solving numerous problems in the robotics industry. This article proposes a simulation framework on the CoppeliaSim platform that implements DRL algorithms to tackle bin picking tasks. Our approach involves training two fully convolutional networks that map visual observations to actions. One network evaluates the effectiveness of pushing across different end-effector directions and locations in dense pixel-level sampling, while the other network does the same for the grasping action. Both networks are jointly trained within the Q-learning framework and are fully self-supervised through trial and error. Successful grasps serve as the reward for this training process. For the simulation experiment, we used a video file generated by the simulation platform that shows a robot arm picking up an object. By applying the DRL algorithm, the robot arm learned through practice to autonomously grasp the object. The simulation results demonstrate that our system can rapidly acquire complex behaviors, even in challenging cases of clutter, and outperforms the baseline in grasping success rate and picking efficiency. Master of Science (Computer Control and Automation). 2023-05-18T05:42:02Z 2023-05-18T05:42:02Z 2023. Thesis-Master by Coursework. Sun, C. (2023). Simulation of bin picking problem based on deep reinforcement learning. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167786 en application/pdf Nanyang Technological University
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
spellingShingle: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics; Sun, Chaoyu; Simulation of bin picking problem based on deep reinforcement learning
description: The application of deep reinforcement learning (DRL) has become prevalent in many fields and has proven effective in solving numerous problems in the robotics industry. This article proposes a simulation framework on the CoppeliaSim platform that implements DRL algorithms to tackle bin picking tasks. Our approach involves training two fully convolutional networks that map visual observations to actions. One network evaluates the effectiveness of pushing across different end-effector directions and locations in dense pixel-level sampling, while the other network does the same for the grasping action. Both networks are jointly trained within the Q-learning framework and are fully self-supervised through trial and error. Successful grasps serve as the reward for this training process. For the simulation experiment, we used a video file generated by the simulation platform that shows a robot arm picking up an object. By applying the DRL algorithm, the robot arm learned through practice to autonomously grasp the object. The simulation results demonstrate that our system can rapidly acquire complex behaviors, even in challenging cases of clutter, and outperforms the baseline in grasping success rate and picking efficiency.
author2: Wen Bihan
author_facet: Wen Bihan; Sun, Chaoyu
format: Thesis-Master by Coursework
author: Sun, Chaoyu
author_sort: Sun, Chaoyu
title: Simulation of bin picking problem based on deep reinforcement learning
title_short: Simulation of bin picking problem based on deep reinforcement learning
title_full: Simulation of bin picking problem based on deep reinforcement learning
title_fullStr: Simulation of bin picking problem based on deep reinforcement learning
title_full_unstemmed: Simulation of bin picking problem based on deep reinforcement learning
title_sort: simulation of bin picking problem based on deep reinforcement learning
publisher: Nanyang Technological University
publishDate: 2023
url: https://hdl.handle.net/10356/167786
_version_: 1772827287907991552