Deep transfer metric learning
Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios, so that their distributions can be assumed to be the same. This assumption does not hold in many real-world visual recognition applications, especially when samples are captured across...
Main Authors: | Hu, Junlin; Lu, Jiwen; Tan, Yap Peng |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2016 |
Subjects: | Face; Face recognition; Learning systems; Machine learning; Training; Visualization; Measurement |
Online Access: | https://hdl.handle.net/10356/80552 http://hdl.handle.net/10220/40552 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-80552 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-80552 (last updated 2020-03-07T13:24:44Z). Deep transfer metric learning. Hu, Junlin; Lu, Jiwen; Tan, Yap Peng. School of Electrical and Electronic Engineering. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Conference Paper, accepted version, 2015. Citation: Hu, J., Lu, J., & Tan, Y.-P. (2015). Deep transfer metric learning. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 325-333. DOI: 10.1109/CVPR.2015.7298629. https://hdl.handle.net/10356/80552 http://hdl.handle.net/10220/40552 en. © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/CVPR.2015.7298629. 9 p. application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
country |
Singapore |
collection |
DR-NTU |
language |
English |
topic |
Face; Face recognition; Learning systems; Machine learning; Training; Visualization; Measurement |
description |
Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios, so that their distributions can be assumed to be the same. This assumption does not hold in many real-world visual recognition applications, especially when samples are captured across different datasets. In this paper, we propose a new deep transfer metric learning (DTML) method that learns a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, DTML learns a deep metric network by maximizing the inter-class variations, minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information in the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method that adds an objective to DTML so that the outputs of both the hidden layers and the top layer are optimized jointly. Experimental results on cross-dataset face verification and person re-identification validate the effectiveness of the proposed methods. |
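The description above combines three terms at the top layer: intra-class compactness, inter-class separation, and a source/target distribution divergence (MMD). A minimal numerical sketch of such a combined objective, assuming simple all-pairs scatter terms and a linear-kernel MMD — the paper's actual formulation operates on learned network features with its own neighbourhood-based terms, and the function and parameter names here (`dtml_objective`, `alpha`, `beta`) are illustrative, not from the paper:

```python
import numpy as np

def pairwise_sq_dists(X):
    # Squared Euclidean distances between all rows of X.
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def dtml_objective(Z_src, y_src, Z_tgt, alpha=1.0, beta=1.0):
    """Simplified DTML-style loss on top-layer features.

    Z_src: labelled source-domain features, shape (n_src, d).
    y_src: source labels, shape (n_src,).
    Z_tgt: unlabelled target-domain features, shape (n_tgt, d).
    Returns intra-class scatter minus alpha * inter-class scatter
    plus beta * squared MMD between the two domains.
    """
    D = pairwise_sq_dists(Z_src)
    same = (y_src[:, None] == y_src[None, :]) & ~np.eye(len(y_src), dtype=bool)
    diff = y_src[:, None] != y_src[None, :]
    s_c = D[same].mean() if same.any() else 0.0   # intra-class compactness
    s_b = D[diff].mean() if diff.any() else 0.0   # inter-class separation
    # Squared MMD with a linear kernel reduces to the distance
    # between the domain means in feature space.
    mmd = np.sum((Z_src.mean(axis=0) - Z_tgt.mean(axis=0)) ** 2)
    return s_c - alpha * s_b + beta * mmd
```

Lower values favour compact classes, well-separated classes, and aligned domain statistics; a deep network's top-layer features would be trained to minimize such a quantity.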
author2 |
School of Electrical and Electronic Engineering |
format |
Conference or Workshop Item |
author |
Hu, Junlin; Lu, Jiwen; Tan, Yap Peng |
author_sort |
Hu, Junlin |
title |
Deep transfer metric learning |
publishDate |
2016 |
url |
https://hdl.handle.net/10356/80552 http://hdl.handle.net/10220/40552 |