Practical learning synergies between pushing and grasping based on DRL

This paper focuses on comparing the performance of intelligent robot manipulation systems built on different deep reinforcement learning technologies. An ideal strategy for robotic manipulation involves two primary components: non-prehensile actions, such as pushing, and prehensile actions,...

Bibliographic Details
Main Author: Huang, Yuanning
Other Authors: Wen Bihan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Robot manipulations; Deep reinforcement learning; Deep Q network
Online Access: https://hdl.handle.net/10356/175513
Institution: Nanyang Technological University
id sg-ntu-dr.10356-175513
record_format dspace
spelling sg-ntu-dr.10356-175513 (2024-04-26T16:01:01Z)
title: Practical learning synergies between pushing and grasping based on DRL
author: Huang, Yuanning
supervisor: Wen Bihan (bihan.wen@ntu.edu.sg), School of Electrical and Electronic Engineering
subjects: Computer and Information Science; Robot manipulations; Deep reinforcement learning; Deep Q network
abstract: This paper focuses on comparing the performance of intelligent robot manipulation systems built on different deep reinforcement learning technologies. An ideal strategy for robotic manipulation involves two primary components: non-prehensile actions, such as pushing, and prehensile actions, such as grasping. Both play pivotal roles in efficient robotic manipulation: pushing can separate clustered objects, creating room for the gripper to grab the target item, while grasping can relocate items to enable more precise pushing and prevent collisions. It is therefore essential to explore synergies between these two fundamental actions. The related literature shows that learning such synergies from scratch with model-free deep reinforcement learning is feasible. One successful approach trains two neural networks simultaneously within a standard DQN framework, relying entirely on self-supervised trial-and-error learning, with rewards contingent on successful grasps. Drawing inspiration from this methodology, this paper introduces the dueling DQN framework as a potential enhancement and compares the integrated performance. More sophisticated reinforcement learning frameworks can boost the overall performance of self-supervised robotic manipulation systems. Most importantly, these experiments clearly demonstrate the connection between the efficiency of intelligent robotic manipulation and the complexity of the deep reinforcement learning model, which may inspire exploration of still more advanced DRL algorithms.
degree: Master's degree
dates: accessioned/available 2024-04-26T06:23:38Z; issued 2024
format: Thesis-Master by Coursework
citation: Huang, Y. (2024). Practical learning synergies between pushing and grasping based on DRL. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175513
language: en
file format: application/pdf
publisher: Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Robot manipulations
Deep reinforcement learning
Deep Q network
description This paper focuses on comparing the performance of intelligent robot manipulation systems built on different deep reinforcement learning technologies. An ideal strategy for robotic manipulation involves two primary components: non-prehensile actions, such as pushing, and prehensile actions, such as grasping. Both play pivotal roles in efficient robotic manipulation: pushing can separate clustered objects, creating room for the gripper to grab the target item, while grasping can relocate items to enable more precise pushing and prevent collisions. It is therefore essential to explore synergies between these two fundamental actions. The related literature shows that learning such synergies from scratch with model-free deep reinforcement learning is feasible. One successful approach trains two neural networks simultaneously within a standard DQN framework, relying entirely on self-supervised trial-and-error learning, with rewards contingent on successful grasps. Drawing inspiration from this methodology, this paper introduces the dueling DQN framework as a potential enhancement and compares the integrated performance. More sophisticated reinforcement learning frameworks can boost the overall performance of self-supervised robotic manipulation systems. Most importantly, these experiments clearly demonstrate the connection between the efficiency of intelligent robotic manipulation and the complexity of the deep reinforcement learning model, which may inspire exploration of still more advanced DRL algorithms.
author2 Wen Bihan
format Thesis-Master by Coursework
author Huang, Yuanning
title Practical learning synergies between pushing and grasping based on DRL
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/175513
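The abstract contrasts a standard DQN with a dueling DQN. The core change in a dueling head is that Q-values are not predicted directly but assembled from a state value V(s) and per-action advantages A(s, a). As a minimal illustrative sketch — not the thesis's actual networks; the function name and the numbers are hypothetical — the dueling aggregation can be written as:

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps the V/A decomposition
    identifiable; this is the key difference between a dueling DQN
    head and a standard DQN head that outputs Q-values directly.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Hypothetical numbers: V(s) = 1.0 and advantages for two motion
# primitives (push, grasp). The mean advantage is 0 here, so Q = V + A.
q = dueling_q_values(1.0, [0.5, -0.5])
print(q)  # [1.5 0.5]
```

In a pushing-and-grasping system of the kind the abstract describes, the action with the highest aggregated Q-value would be the motion primitive the robot executes next.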