DFBVS: deep feature-based visual servo
Classical Visual Servoing (VS) relies on handcrafted visual features, which limits its generalizability. Recently, a number of approaches, some based on Deep Neural Networks, have been proposed to overcome this limitation by directly comparing the entire target and current camera images. However, by discarding visual features altogether, those approaches require the target and current images to be essentially similar, which precludes generalization to unknown, cluttered scenes. Here we propose to perform VS based on visual features, as in classical VS approaches, but, contrary to the latter, we leverage recent breakthroughs in Deep Learning to automatically extract and match the visual features. By doing so, our approach enjoys the advantages of both worlds: (i) because it is based on visual features, it can steer the robot towards the object of interest even in the presence of significant distraction in the background; (ii) because the features are automatically extracted and matched, it can easily and automatically generalize to unseen objects and scenes. In addition, we propose to use a render engine to synthesize the target image, which offers a further level of generalization. We demonstrate these advantages in a robotic grasping task, where the robot steers, with high accuracy, towards the object to grasp based simply on an image of the object rendered from the camera view corresponding to the desired grasping pose.
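The abstract outlines a two-stage pipeline: deep features are automatically extracted and matched between the current camera image and a rendered target image, and the matched points then drive a classical feature-based control law. The following is a minimal sketch, not taken from the paper, of the classical image-based visual servoing step on such matched point features; the feature matcher, point normalization, and per-point depth estimates are assumed to be provided upstream, and all function names here are hypothetical.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a normalized image point (x, y) at depth Z,
    relating its image velocity to the 6-DoF camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(current_pts, target_pts, depths, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving the matched
    current features toward their locations in the target image."""
    # Stack one 2x6 interaction matrix per matched point.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(current_pts, depths)])
    # Feature error between current and (rendered) target image points.
    error = (np.asarray(current_pts) - np.asarray(target_pts)).reshape(-1)
    # Classical IBVS law: v = -lambda * pinv(L) * e.
    return -gain * np.linalg.pinv(L) @ error
```

In the grasping setting described in the abstract, the target points would come from an image rendered at the camera view of the desired grasp pose, so the same control loop can be applied to unseen objects without hand-designed features.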
Main Authors: Adrian, Nicholas; Do, Van Thach; Pham, Quang-Cuong
Other Authors: School of Mechanical and Aerospace Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Engineering::Mechanical engineering; Deep Learning; Visualization
Online Access: https://hdl.handle.net/10356/171751
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-171751
record_format: dspace
spelling:
  Title: DFBVS: deep feature-based visual servo
  Authors: Adrian, Nicholas; Do, Van Thach; Pham, Quang-Cuong
  Affiliations: School of Mechanical and Aerospace Engineering; HP-NTU Digital Manufacturing Corporate Lab
  Conference: IEEE 18th International Conference on Automation Science and Engineering (CASE 2022)
  Subjects: Engineering::Mechanical engineering; Deep Learning; Visualization
  Funding: This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner, HP Inc., through the HP-NTU Digital Manufacturing Corporate Lab.
  Deposited: 2023-11-07T01:57:00Z; Published: 2022
  Type: Conference Paper
  Citation: Adrian, N., Do, V. T. & Pham, Q. (2022). DFBVS: deep feature-based visual servo. IEEE 18th International Conference on Automation Science and Engineering (CASE 2022), 1783-1789. https://dx.doi.org/10.1109/CASE49997.2022.9926560
  ISBN: 9781665490429
  Handle: https://hdl.handle.net/10356/171751
  DOI: 10.1109/CASE49997.2022.9926560
  Scopus ID: 2-s2.0-85141708941
  Pages: 1783-1789
  Language: en
  Rights: © 2022 IEEE. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Mechanical engineering; Deep Learning; Visualization
author2: School of Mechanical and Aerospace Engineering
format: Conference or Workshop Item
author: Adrian, Nicholas; Do, Van Thach; Pham, Quang-Cuong
title: DFBVS: deep feature-based visual servo
publishDate: 2023
url: https://hdl.handle.net/10356/171751