Intelligent robotic grasping and manipulation system with deep learning

Bibliographic Details
Main Author: Chu, You-Rui
Other Authors: Lin Zhiping
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/158029
Institution: Nanyang Technological University
Description
Summary: Random object grasping remains an unsolved problem in robotics. Vision-based robotic grasping is typically classified into two approaches: 2D planar grasping and 6-DoF (degree-of-freedom) grasping. This project focuses on predicting 6-DoF grasp poses from RGB-D images. Most current 6-DoF grasp approaches generate grasps from point clouds or from unstable depth images, which can lead to undesirable results in some cases. The proposed method divides 6-DoF grasp detection into three sub-stages. The first stage is LocNet, a convolutional encoder-decoder neural network that predicts the locations of objects in the image. The second stage is ViewAngleNet, another convolutional encoder-decoder network, similar to LocNet but with a different output head, which predicts the 3D rotation of the gripper at the objects' image locations. Finally, an analytical search algorithm determines the gripper's opening width and its distance from the grasp point. Real-world experiments conducted with a UR10 robot arm, an Intel RealSense camera, and a Robotiq two-finger gripper on single-object and cluttered scenes show satisfactory success rates.
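
The abstract describes LocNet and ViewAngleNet as convolutional encoder-decoder networks that share the same overall structure and differ only in their output heads. The PyTorch sketch below is a minimal illustration of that idea, not the thesis's actual implementation: the framework choice, class name, layer sizes, channel counts, input resolution, and the number of discretised view angles are all assumptions made for the example.

# Illustrative sketch only: SimpleEncoderDecoder, its channel counts and
# the 18-angle head are assumptions, not the thesis's LocNet/ViewAngleNet.
import torch
import torch.nn as nn

class SimpleEncoderDecoder(nn.Module):
    """Convolutional encoder-decoder with a configurable output head."""

    def __init__(self, in_channels=4, num_out_channels=1):
        super().__init__()
        # Encoder: downsample the RGB-D input (4 channels assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Output head: the only part that differs between the two networks.
        self.head = nn.Conv2d(32, num_out_channels, 1)

    def forward(self, x):
        return self.head(self.decoder(self.encoder(x)))

# Usage sketch: a LocNet-like heatmap and a ViewAngleNet-like
# multi-channel prediction for one RGB-D image.
rgbd = torch.randn(1, 4, 224, 224)                     # RGB + depth
loc_net = SimpleEncoderDecoder(num_out_channels=1)
view_net = SimpleEncoderDecoder(num_out_channels=18)   # 18 discretised angles (assumed)
heatmap = torch.sigmoid(loc_net(rgbd))                 # per-pixel grasp-location confidence
angles = view_net(rgbd)                                # per-pixel view-angle scores
print(heatmap.shape, angles.shape)                     # (1, 1, 224, 224), (1, 18, 224, 224)

In such a setup, the single-channel instance plays the role of a location predictor, while the multi-channel instance scores candidate gripper rotations at each pixel; the gripper's opening width and approach distance would then come from the separate analytical search described in the abstract.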