Intelligent robotic grasping and manipulation system with deep learning

Random object grasping is a crucial and still unsolved problem in robotics. Vision-based robotic grasping is typically classified into two approaches: 2D planar grasp and 6-DoF (degree-of-freedom) grasp. This project focuses on predicting 6-DoF grasp poses from RGB-D images. Most current approaches generate 6-DoF grasps from point clouds or from unstable depth images, which may lead to undesirable results in some cases. The proposed method divides 6-DoF grasp detection into three sub-stages. The first stage, LocNet, is a convolutional encoder-decoder neural network that predicts the locations of objects in the image. The second stage, ViewAngleNet, is a convolutional encoder-decoder network with the same structure as LocNet but a different output head; it predicts the 3D rotation of the gripper at each predicted object location. Finally, an analytical search algorithm determines the gripper's opening width and its distance from the grasp point. Real-world experiments conducted with a UR10 robot arm, an Intel RealSense camera and a Robotiq two-finger gripper on single-object and cluttered scenes show satisfactory success rates.
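The three-stage decomposition described in the abstract can be sketched as follows. This is an illustrative sketch only, not the thesis's actual implementation: the two networks are stood in for by precomputed output maps, and the function names, the pixel-space width search, and the synthetic scene are all assumptions made for the example.

```python
import numpy as np


def locate_grasp(heatmap):
    """Stage 1 (LocNet stand-in): pick the pixel with the highest
    grasp-location score from a location heatmap."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)


def view_angle_at(angle_map, loc):
    """Stage 2 (ViewAngleNet stand-in): read the predicted gripper
    rotation at the chosen pixel from an angle map."""
    return angle_map[loc]


def search_width_and_depth(depth_image, loc, max_width_px=40):
    """Stage 3: analytical search for the gripper's opening width and
    its approach depth. Widen the gripper (in pixels, along the image
    row) until both finger positions are farther away than the object
    surface at the grasp point, i.e. the fingers clear the object."""
    r, c = loc
    obj_depth = depth_image[r, c]
    for w in range(2, max_width_px, 2):
        left = depth_image[r, max(c - w // 2, 0)]
        right = depth_image[r, min(c + w // 2, depth_image.shape[1] - 1)]
        if left > obj_depth and right > obj_depth:
            return w, obj_depth
    return max_width_px, obj_depth


# Demo on a synthetic 8x8 scene: a small object (0.5 m away) on a
# table plane 1.0 m away, with hand-made "network outputs".
heatmap = np.zeros((8, 8)); heatmap[3, 4] = 1.0        # LocNet output
angle_map = np.zeros((8, 8)); angle_map[3, 4] = np.pi / 6  # ViewAngleNet output
depth = np.ones((8, 8)); depth[3, 3:6] = 0.5           # depth image

loc = locate_grasp(heatmap)
angle = view_angle_at(angle_map, loc)
width, approach_depth = search_width_and_depth(depth, loc)
print(loc, angle, width, approach_depth)
```

The point of the sketch is the pipeline shape: two dense per-pixel prediction maps feed a cheap analytical post-processing step, so no network has to regress the full 6-DoF pose directly.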


Bibliographic Details
Main Author: Chu, You-Rui
Other Authors: Lin Zhiping
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/158029
Institution: Nanyang Technological University
Record Details
id: sg-ntu-dr.10356-158029
Organisations: School of Electrical and Electronic Engineering; Singapore Institute of Manufacturing Technology
Supervisors: Lin Zhiping, Zhu Haiyue (EZPLin@ntu.edu.sg, zhu_haiyue@simtech.a-star.edu.sg)
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Citation: Chu, Y. (2022). Intelligent robotic grasping and manipulation system with deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore.
Date available: 2022-05-27
Format: application/pdf
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)