IMAGE-BASED VISUAL SERVOING ON PUMA260 MANIPULATOR WITH YOLOV7 AS FEATURE EXTRACTOR
| Main Author: | |
| --- | --- |
| Format: | Theses |
| Language: | Indonesian |
| Online Access: | https://digilib.itb.ac.id/gdl/view/80164 |
| Institution: | Institut Teknologi Bandung |
Summary: Manipulator robots are widely used in industry to handle repetitive tasks such as moving parts. Visual servoing was developed to compensate for a weakness of conventional robot control, which requires precise knowledge of the object and the robot model. Recent research in deep learning-based object detection introduced YOLOv7, which offers high accuracy and is capable of detecting objects in real time. In this research, YOLOv7 is implemented to control the movement of a robot manipulator. The YOLOv7 model is trained to detect a purple ball using 2100 training images. The Image-Based Visual Servoing (IBVS) scheme is developed by deriving the control law, the robot Jacobian matrix, and the image Jacobian matrix (a sketch of the control law is given below). The depth of the feature points in the image Jacobian matrix is estimated by quadratic regression on the width of the detected object (see the code sketch below). The algorithm is applied on a PUMA260 manipulator robot with 6 degrees of freedom. The robot manipulator is expected to track the purple ball by minimizing the normalized error. The test results show that the designed controller is stable and that the error is minimized within 65 iterations with a proportional constant of 0.07.
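The depth Z that feeds the image Jacobian is described as a quadratic regression on the width of the detected object. A minimal sketch of that step, assuming hypothetical calibration pairs of YOLOv7 bounding-box width and measured depth, is given below; the sample values, function names, and use of NumPy are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Hypothetical calibration data: (bounding-box width in pixels, measured depth in metres).
# In practice these would be collected by placing the purple ball at known distances.
widths_px = np.array([180.0, 140.0, 110.0, 90.0, 75.0, 64.0])
depths_m = np.array([0.20, 0.26, 0.33, 0.40, 0.48, 0.56])

# Quadratic regression: Z(w) = a*w^2 + b*w + c (coefficients returned highest order first).
a, b, c = np.polyfit(widths_px, depths_m, deg=2)


def estimate_depth(box_width_px: float) -> float:
    """Estimate the feature-point depth from the detected bounding-box width."""
    return a * box_width_px**2 + b * box_width_px + c


if __name__ == "__main__":
    w = 100.0  # width of the detected purple ball in pixels (example value)
    print(f"estimated depth for w={w:.0f}px: {estimate_depth(w):.3f} m")
```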