Multicomponent adversarial domain adaptation: a general framework

Domain adaptation (DA) aims to transfer knowledge from a source domain to a different but related target domain. The mainstream approach embeds adversarial learning into deep neural networks (DNNs) to either learn domain-invariant features that reduce the domain discrepancy or generate data to fill in the domain gap. However, these adversarial DA (ADA) approaches mainly consider domain-level data distributions, while ignoring the differences among the components contained in different domains. As a result, components that are not related to the target domain are not filtered out, which can cause negative transfer. In addition, it is difficult to make full use of the relevant components shared between the source and target domains to enhance DA. To address these limitations, we propose a general two-stage framework, named multicomponent ADA (MCADA). This framework trains the target model by first learning a domain-level model and then fine-tuning that model at the component level. In particular, MCADA constructs a bipartite graph to find the most relevant component in the source domain for each component in the target domain. Since the nonrelevant components are filtered out for each target component, fine-tuning the domain-level model can enhance positive transfer. Extensive experiments on several real-world datasets demonstrate that MCADA has significant advantages over state-of-the-art methods.
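The component-matching step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the use of component centroids and cosine similarity as the bipartite edge weight are assumptions, and the real MCADA criterion may differ.

```python
import numpy as np

def match_components(source_centroids, target_centroids):
    """For each target component, pick the most relevant source component.

    Hypothetical sketch of MCADA's bipartite-graph matching: rows of each
    array are component centroids in feature space, edge weights are cosine
    similarities, and each target component keeps only its best source match
    (the nonrelevant source components are filtered out).
    """
    # L2-normalize centroids so the dot product equals cosine similarity
    s = source_centroids / np.linalg.norm(source_centroids, axis=1, keepdims=True)
    t = target_centroids / np.linalg.norm(target_centroids, axis=1, keepdims=True)
    sim = t @ s.T              # (n_target, n_source) bipartite edge weights
    return sim.argmax(axis=1)  # index of the most relevant source component

# Toy example: 3 source components, 2 target components in a 2-D feature space
src = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tgt = np.array([[0.9, 0.1], [0.1, 1.2]])
print(match_components(src, tgt).tolist())  # → [0, 1]
```

In this sketch each target component is matched independently (an argmax per graph node), matching the abstract's "most relevant component ... for each component in the target domain"; a one-to-one assignment would instead require something like the Hungarian algorithm.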

Bibliographic Details
Main Authors: Yi, Chang'an, Chen, Haotian, Xu, Yonghui, Chen, Huanhuan, Liu, Yong, Tan, Haishu, Yan, Yuguang, Yu, Han
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2023
Subjects: Engineering::Computer science and engineering; Adversarial Training; Bipartite Graph
Online Access:https://hdl.handle.net/10356/170572
Institution: Nanyang Technological University
Affiliations: School of Computer Science and Engineering; Alibaba-NTU Singapore Joint Research Institute; Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
Funding agencies: Agency for Science, Technology and Research (A*STAR); Nanyang Technological University; National Research Foundation (NRF)
Funding: This work was supported in part by the National Key Research and Development Program of China under Grant 2021YFF0900800; in part by the China-Singapore International Joint Research Project under Grant 206-A021002; in part by the Shandong Provincial Natural Science Foundation under Grant ZR2022QF018; in part by the Shandong Provincial Excellent Youth Science Fund Project (Overseas) under Grant 2023HWYQ-039; in part by the Foundation Research Fund of Shandong University; in part by the National Natural Science Foundation of China under Grant 62271148, Grant 62206061, and Grant 61972091; in part by the Natural Science Foundation of Guangdong Province of China under Grant 2022A1515010101 and Grant 2022A1515011544; in part by the Guangzhou Basic and Applied Basic Research Foundation under Grant SL2022A04J01182; in part by the Key Research Project of Universities of Guangdong Province of China under Grant 2019KZDXM007; in part by the Special Fund for Science and Technology of Guangdong Province under Grant 2021S0053; in part by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program under AISG Award AISG2-RP-2020-019; in part by the Nanyang Technological University, Nanyang Assistant Professorship (NAP); and in part by the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund, Singapore, under Grant A20G8b0102.
Citation: Yi, C., Chen, H., Xu, Y., Chen, H., Liu, Y., Tan, H., Yan, Y. & Yu, H. (2023). Multicomponent adversarial domain adaptation: a general framework. IEEE Transactions on Neural Networks and Learning Systems.
DOI: 10.1109/TNNLS.2023.3270359
ISSN: 2162-237X
PubMed ID: 37224350
Scopus ID: 2-s2.0-85161046417
Rights: © 2023 IEEE. All rights reserved.