Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation
Marginalized stacked denoising autoencoder (mSDA) has recently emerged with demonstrated effectiveness in domain adaptation. In this paper, we investigate why mSDA benefits domain adaptation tasks from the perspective of adaptive regularization. Our investigations focus on two typ...
Main Authors: | Wei, Pengfei; Ke, Yiping; Goh, Chi Keong |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Article |
Language: | English |
Published: | 2021 |
Subjects: | Engineering::Computer science and engineering; Deep Feature Learning and Feature Analysis; Marginalized Denoising Autoencoder |
Online Access: | https://hdl.handle.net/10356/151969 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-151969 |
---|---|
record_format | dspace |
spelling |
Title: Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation
Authors: Wei, Pengfei; Ke, Yiping; Goh, Chi Keong (School of Computer Science and Engineering)
Subjects: Engineering::Computer science and engineering; Deep Feature Learning and Feature Analysis; Marginalized Denoising Autoencoder
Funding: Ministry of Education (MOE); National Research Foundation (NRF). This work was supported in part by the National Research Foundation Singapore through the Corp Lab@University Scheme and in part by the Ministry of Education of Singapore through AcRF Tier-1 under Grant RG135/14.
Dates: accessioned 2021-07-08T05:27:12Z; available 2021-07-08T05:27:12Z; issued 2018
Type: Journal Article
Citation: Wei, P., Ke, Y. & Goh, C. K. (2018). Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation. IEEE Transactions on Neural Networks and Learning Systems, 30(5), 1321-1334. https://dx.doi.org/10.1109/TNNLS.2018.2868709
ISSN: 2162-2388 | Handle: https://hdl.handle.net/10356/151969 | DOI: 10.1109/TNNLS.2018.2868709 | PMID: 30281483 | Scopus: 2-s2.0-85054377418 | Grant: RG135/14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Rights: © 2018 IEEE. All rights reserved.
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering; Deep Feature Learning and Feature Analysis; Marginalized Denoising Autoencoder |
description |
Marginalized stacked denoising autoencoder (mSDA) has recently emerged with demonstrated effectiveness in domain adaptation. In this paper, we investigate why mSDA benefits domain adaptation tasks from the perspective of adaptive regularization. Our investigations focus on two types of feature corruption noise: Gaussian noise (mSDA_g) and Bernoulli dropout noise (mSDA_bd). Both theoretical and empirical results demonstrate that mSDA_bd successfully boosts the adaptation performance, whereas mSDA_g fails to do so. We then propose a new mSDA with data-dependent multinomial dropout noise (mSDA_md) that overcomes the limitations of mSDA_bd and further improves the adaptation performance. Our mSDA_md is based on a more realistic assumption: different features are correlated and should therefore be corrupted with different probabilities. Experimental results demonstrate the superiority of mSDA_md over mSDA_bd in both adaptation performance and convergence speed. Finally, we propose a deep transferable feature coding (DTFC) framework for unsupervised domain adaptation. The motivation for DTFC is that mSDA fails to consider the distribution discrepancy across domains during feature learning. We introduce a new element to mSDA: domain divergence minimization by maximum mean discrepancy (MMD). This element is essential for domain adaptation, as it ensures that the extracted deep features have a small distribution discrepancy. The effectiveness of DTFC is verified by extensive experiments on three benchmark data sets for both Bernoulli dropout noise and multinomial dropout noise.
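
To make the abstract's mechanics concrete, the sketch below illustrates, under simplifying assumptions, the closed-form marginalized denoising layer that mSDA builds on (dropout corruption is marginalized out analytically, giving a mapping W = P Q^{-1}), together with a linear-kernel MMD estimate of the kind DTFC uses to measure domain divergence. The function names (`mda_layer`, `linear_mmd`), the ridge term, the toy data, and the per-feature keep-probability option are illustrative assumptions, not taken from the paper; in particular, the per-feature probabilities are only a simplified stand-in for the data-dependent multinomial dropout noise of mSDA_md, and DTFC couples its MMD term into the feature-learning objective rather than computing it after the fact.

```python
# Illustrative sketch only: one marginalized denoising layer with dropout
# noise marginalized out in closed form, plus a linear-kernel MMD estimate
# between source and target codes. Not the paper's exact mSDA_md / DTFC.
import numpy as np

def mda_layer(X, keep_prob):
    """One marginalized denoising layer.

    X         : (d, n) data matrix, one column per example.
    keep_prob : scalar keep probability (Bernoulli dropout) or a length-d
                vector of per-feature keep probabilities (a simplified
                stand-in for data-dependent, feature-specific noise).
    Returns (W, H): mapping W of shape (d, d+1) and codes H = tanh(W Xb).
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # append a bias row
    q = np.ones(d + 1)
    q[:d] = keep_prob                          # the bias is never corrupted
    S = Xb @ Xb.T                              # scatter matrix, (d+1, d+1)
    Q = S * np.outer(q, q)                     # expected corrupted scatter
    np.fill_diagonal(Q, q * np.diag(S))        # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q                           # expected cross term, (d, d+1)
    # Solve W Q = P; the small ridge term is only for numerical stability.
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T
    H = np.tanh(W @ Xb)
    return W, H

def linear_mmd(Hs, Ht):
    """Squared MMD with a linear kernel: distance between feature means."""
    diff = Hs.mean(axis=1) - Ht.mean(axis=1)
    return float(diff @ diff)

# Toy usage with hypothetical source/target data.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 200))                # source: 50 features, 200 examples
Xt = rng.normal(loc=0.5, size=(50, 300))       # target with a mean shift
X_all = np.hstack([Xs, Xt])                    # mSDA trains on both domains, unlabeled

W, H_all = mda_layer(X_all, keep_prob=0.5)     # Bernoulli dropout with p = 0.5
Hs, Ht = H_all[:, :200], H_all[:, 200:]
print("MMD between source and target codes:", linear_mmd(Hs, Ht))
```

Stacking layers, as in the full mSDA, would simply feed each layer's tanh codes into the next `mda_layer` call.
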
author2 | School of Computer Science and Engineering |
format | Article |
author | Wei, Pengfei; Ke, Yiping; Goh, Chi Keong |
title | Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation |
publishDate | 2021 |
url | https://hdl.handle.net/10356/151969 |
_version_ | 1705151348408844288 |