Integrating force-based manipulation primitives with deep visual servoing for robotic assembly

Full description

Bibliographic Details
Main Author: Lee, Yee Sien
Other Authors: Pham Quang Cuong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157880
Institution: Nanyang Technological University
Description
Summary: This paper explores the idea of combining Deep Learning-based Visual Servoing with dynamic sequences of force-based Manipulation Primitives for robotic assembly tasks. Most current peg-in-hole algorithms assume the initial peg pose is already aligned within a minute deviation range before a tight-clearance insertion is attempted. By integrating tactile and visual information, highly accurate peg alignment before insertion can be achieved autonomously. In the alignment phase, the peg mounted on the end-effector is aligned automatically from an initial pose with large displacement errors to an estimated insertion pose with errors below 1.5 mm in translation and 1.5° in rotation, all from a single one-shot Deep Learning-based Visual Servoing estimate. If Deep Learning-based Visual Servoing alone cannot complete the peg-in-hole insertion, a dynamic sequence of Manipulation Primitives is then automatically generated via Reinforcement Learning to finish the last stage of insertion.
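The two-phase control flow described in the summary — a one-shot visual alignment step followed by a Reinforcement Learning fallback — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `visual_servo_estimate` and `run_primitive_sequence` are hypothetical stand-ins for the learned components, and only the acceptance thresholds (1.5 mm, 1.5°) come from the abstract.

```python
# Acceptance thresholds reported in the abstract: the one-shot visual
# servoing estimate should leave at most 1.5 mm translation error and
# 1.5 degrees rotation error before direct insertion is attempted.
TRANS_TOL_MM = 1.5
ROT_TOL_DEG = 1.5


def within_tolerance(trans_err_mm: float, rot_err_deg: float) -> bool:
    """Check whether residual alignment error permits direct insertion."""
    return trans_err_mm <= TRANS_TOL_MM and rot_err_deg <= ROT_TOL_DEG


def assemble(visual_servo_estimate, run_primitive_sequence) -> str:
    """Two-stage pipeline sketch.

    `visual_servo_estimate` is a hypothetical callable returning the
    residual (translation mm, rotation deg) error after the one-shot
    Deep Learning-based Visual Servoing alignment.  If that error is
    within tolerance, insertion proceeds directly; otherwise a
    Reinforcement Learning policy (`run_primitive_sequence`, also a
    stand-in) generates a dynamic sequence of force-based Manipulation
    Primitives to finish the insertion.
    """
    trans_err, rot_err = visual_servo_estimate()
    if within_tolerance(trans_err, rot_err):
        return "inserted_via_visual_servoing"
    # Fall back to the RL-generated dynamic primitive sequence.
    return run_primitive_sequence(trans_err, rot_err)


if __name__ == "__main__":
    # Stubbed components: a well-aligned estimate, and a primitive
    # sequence that always succeeds.
    result = assemble(lambda: (0.8, 0.5),
                      lambda t, r: "inserted_via_primitives")
    print(result)  # -> inserted_via_visual_servoing
```

The design point the pipeline illustrates is that vision handles the large initial displacement while force-based primitives absorb only the residual tight-clearance error.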