Deep learning for object detection and image segmentation
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/140433
Institution: Nanyang Technological University
Summary: In recent years, the fast-moving consumer goods (FMCG) industry has shown significant interest in robot warehouse automation, driven by the growth of e-commerce and the demand for fast, reliable delivery. However, packing a large variety of products according to mass-customized orders is not a simple task. A fully autonomous warehouse pick-and-place system can complete this job by employing a robust vision system that reliably locates and recognizes objects despite cluttered environments, object variety and self-occlusions. The aim of this project is to develop an automated solution that allows the robot to accurately pick the indicated object from a cluttered bin. The robot system consists of a UR5 robotic arm fitted with a gripper and a vision camera. In the proposed approach, multiple perspectives of the scene are segmented and labelled using a convolutional neural network. Because a large amount of training data is required to train a deep segmentation network, the proposed solution uses a self-supervised method to obtain a large training dataset more quickly. Mask R-CNN was also implemented to detect each item and predict its individual mask, achieving higher accuracy for object detection and image segmentation.
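For illustration only, the sketch below shows how instance masks of the kind described in the summary might be obtained from a pretrained Mask R-CNN using torchvision. It is not the project's actual code: the weights, the image file name "bin.jpg", and the score/mask thresholds are assumptions, and a real bin-picking system would fine-tune the model on its own (e.g. self-supervised) dataset of warehouse items.

```python
# Minimal sketch (assumed setup, not the project's implementation):
# instance segmentation of a cluttered bin with a COCO-pretrained Mask R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained Mask R-CNN; in practice this would be fine-tuned on
# the warehouse items before deployment.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

# "bin.jpg" is a placeholder file name for an RGB image of the bin.
image = to_tensor(Image.open("bin.jpg").convert("RGB"))  # CxHxW, values in [0, 1]

with torch.no_grad():
    pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep confident detections and threshold the soft masks (1xHxW each) into
# binary instance masks that a grasp planner could consume.
keep = pred["scores"] > 0.7
masks = (pred["masks"][keep] > 0.5).squeeze(1)
boxes = pred["boxes"][keep]
print(f"Detected {int(keep.sum())} objects")
```

Per-instance binary masks, rather than bounding boxes alone, let the picking system separate touching or partially occluded items and choose a collision-free grasp point on the indicated object.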