Deep self-supervised representation learning for free-hand sketch
In this paper, we tackle, for the first time, the problem of self-supervised representation learning for free-hand sketches. This addresses an important, common problem faced by the sketch community: annotated supervisory data are difficult to obtain. The problem is very challenging because sketches are highly abstract and subject to different drawing styles, making existing solutions tailored for photos unsuitable. The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs: (i) we propose a set of pretext tasks specifically designed for sketches that mimic different drawing styles, and (ii) we further exploit a textual convolution network (TCN) together with a convolutional neural network (CNN) in a dual-branch architecture for sketch feature learning, as a means to accommodate the sequential stroke nature of sketches. We demonstrate the superiority of our sketch-specific designs through two sketch-related applications (retrieval and recognition) on a million-scale sketch dataset, and show that the proposed approach outperforms state-of-the-art unsupervised representation learning methods and significantly narrows the performance gap with supervised representation learning.
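The abstract's two sketch-specific designs can be made concrete with short, hypothetical sketches. The paper's pretext tasks mimic different drawing styles; the snippet below substitutes a generic rotation-prediction pretext task (predicting which of four rotations was applied to a rasterized sketch) purely to show the label-free training pattern. It is a stand-in, not the authors' design:

```python
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Build a label-free batch: rotate each sketch image by a random
    multiple of 90 degrees; the rotation index is the pseudo-label.
    Assumes square images of shape (B, C, H, W). A generic stand-in
    for the paper's drawing-style pretext tasks."""
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    return rotated, k

def pretext_step(encoder, head, images, optimizer):
    """One self-supervised step: classify which rotation was applied.
    `encoder` and `head` are any torch modules, e.g. an image encoder
    plus a small linear classifier over its features."""
    x, y = rotation_pretext_batch(images)
    loss = F.cross_entropy(head(encoder(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For the dual-branch encoder, the abstract pairs a CNN over the rasterized image with a TCN over the stroke sequence. A minimal sketch follows, assuming the TCN branch can be approximated by 1-D convolutions over (x, y, pen-state) stroke points; all layer sizes and the concatenation fusion are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class DualBranchSketchEncoder(nn.Module):
    """Hypothetical dual-branch encoder: a 2-D CNN branch for the
    rasterized sketch image plus a 1-D convolutional branch for the
    stroke-point sequence, fused by concatenation."""

    def __init__(self, point_dim=3, feat_dim=256):
        super().__init__()
        # CNN branch over a 1 x 224 x 224 grayscale rasterization.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        # Sequence branch over (x, y, pen-state) stroke points, which
        # accommodates the sequential stroke nature of sketches.
        self.tcn = nn.Sequential(
            nn.Conv1d(point_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, image, points):
        # image: (B, 1, 224, 224); points: (B, point_dim, T)
        return torch.cat([self.cnn(image), self.tcn(points)], dim=1)
```

The concatenated embedding could feed the pretext head during self-supervised pre-training and later serve retrieval (nearest-neighbour search over embeddings) or recognition (a linear classifier), matching the two applications the abstract evaluates.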
Main Authors: Xu, Peng; Song, Zeyu; Yin, Qiyue; Song, Yi-Zhe; Wang, Liang
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Feature Extraction; Task Analysis
Citation: Xu, P., Song, Z., Yin, Q., Song, Y. & Wang, L. (2020). Deep self-supervised representation learning for free-hand sketch. IEEE Transactions on Circuits and Systems for Video Technology, 31(4), 1503-1513.
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2020.3003048
Online Access: https://hdl.handle.net/10356/160523
Funding: Supported in part by the BUPT Excellent Ph.D. Student Foundation CX2017307 and in part by the BUPT-SICE Excellent Graduate Student Innovation Foundation.
Rights: © 2020 IEEE. All rights reserved.
Institution: Nanyang Technological University
Collection: DR-NTU (NTU Library)