Vision-based robotic grasping: developing a grasp planning algorithm for object manipulation

Grasp detection has become a pivotal aspect of robotic manipulation, allowing robots to identify specific points on an object for successful grasping. This report proposes a vision-based grasping algorithm capable of generating grasp poses in both single-object and multi-object scenarios. The proposed algorithm is first elaborated in detail, covering its architecture, its use of the Generative Residual Convolutional Neural Network (GR-ConvNet) as a backbone, and its hyperparameters, loss function and evaluation metrics. Secondly, the steps taken to collect the custom dataset are described to illustrate the complexity and quality of the dataset. Thirdly, the proposed algorithm is trained on the Cornell Grasping Dataset and on several variations of the custom dataset. The resulting models are compared on their validation and evaluation metrics and on the grasp poses they generate for individual objects, multi-object scenes and, lastly, novel objects. The models generally produced satisfactory results, though their remaining limitations are also discussed. The project was further integrated with another Final Year Project that utilised ROS2 to develop a motion-planning control module for a UR10 robotic arm; the combined system was used to perform pick-and-place tasks on static objects, and its results are reported. The report concludes with a summary of the project's overall progress and the performance of the proposed algorithm, together with future developments aimed at addressing the existing limitations.
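For context, GR-ConvNet-style generative grasp networks predict dense per-pixel maps (grasp quality, the grasp angle encoded as cos 2θ and sin 2θ, and gripper width), and a grasp pose is decoded from the maximum of the quality map. The sketch below illustrates that common decoding step; it is a minimal illustration of the published GR-ConvNet convention, not code from the report, and all names (decode_grasp, the map arguments) are hypothetical.

    # Minimal sketch: decoding a grasp pose from GR-ConvNet-style output maps.
    # Assumes four per-pixel maps from the network: quality q, cos(2*theta),
    # sin(2*theta) and gripper width. Names are illustrative, not from the report.
    import numpy as np

    def decode_grasp(q_map, cos_map, sin_map, width_map):
        """Return (row, col, angle, width) at the highest-quality pixel."""
        row, col = np.unravel_index(np.argmax(q_map), q_map.shape)
        # The angle is regressed as (cos 2theta, sin 2theta) so that a grasp and
        # its antipodal twin (theta and theta + pi) share one target; recover
        # theta with atan2 and halve it.
        angle = 0.5 * np.arctan2(sin_map[row, col], cos_map[row, col])
        return row, col, angle, width_map[row, col]

    # Example with random arrays standing in for network output:
    rng = np.random.default_rng(0)
    q, c, s, w = (rng.random((224, 224)) for _ in range(4))
    print(decode_grasp(q, c, s, w))

On the Cornell benchmark mentioned above, a decoded grasp is conventionally scored as correct when its rectangle reaches an IoU above 25% with a ground-truth rectangle and its orientation lies within 30 degrees of it; whether the report adopts exactly this metric is an assumption here.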

Bibliographic Details
Main Author: Thio, Zheng Yang
Other Authors: Chen, I-Ming (School of Mechanical and Aerospace Engineering, Robotics Research Centre; MICHEN@ntu.edu.sg)
Format: Final Year Project (FYP)
Degree: Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering; Vision
Citation: Thio, Z. Y. (2024). Vision-based robotic grasping: developing a grasp planning algorithm for object manipulation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/177510
Online Access: https://hdl.handle.net/10356/177510
Institution: Nanyang Technological University
Collection: DR-NTU (NTU Library)