Make the U in UDA matter: Invariant consistency learning for unsupervised domain adaptation

Domain Adaptation (DA) is always challenged by the spurious correlation between domain-invariant features (e.g., class identity) and domain-specific features (e.g., environment) that do not generalize to the target domain. Unfortunately, even enriched with additional unsupervised target domains, existing Unsupervised DA (UDA) methods still suffer from it. This is because the source domain supervision only considers the target domain samples as auxiliary data (e.g., by pseudo-labeling), yet the inherent distribution of the target domain, where the valuable de-correlation clues hide, is disregarded. We propose to make the U in UDA matter by giving equal status to the two domains. Specifically, we learn an invariant classifier whose prediction is simultaneously consistent with the labels in the source domain and the clusters in the target domain; hence the spurious correlation, which is inconsistent in the target domain, is removed. We dub our approach "Invariant CONsistency learning" (ICON). Extensive experiments show that ICON achieves state-of-the-art performance on the classic UDA benchmarks OFFICE-HOME and VISDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
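As a rough illustration of the two-sided consistency described in the abstract, below is a minimal PyTorch-style sketch, assuming target pseudo-labels come from an off-the-shelf clustering step run on target features; all function and argument names are hypothetical, and the paper's actual objective may differ (see the linked PDF for details).

    import torch
    import torch.nn.functional as F

    def icon_style_loss(classifier, feat_src, y_src, feat_tgt, tgt_cluster_ids):
        """Combine source supervision with target cluster consistency.

        feat_src / feat_tgt: feature batches from the source and target domains.
        y_src: ground-truth source labels.
        tgt_cluster_ids: pseudo-labels from unsupervised clustering of target
            features (e.g., k-means rerun each epoch). Assumes clusters have
            been matched to class indices (e.g., by Hungarian matching).
        """
        # Supervised consistency: predictions must match source labels.
        loss_src = F.cross_entropy(classifier(feat_src), y_src)

        # Unsupervised consistency: predictions must agree with the target
        # domain's own cluster structure.
        loss_tgt = F.cross_entropy(classifier(feat_tgt), tgt_cluster_ids)

        # A classifier minimizing both terms is discouraged from relying on
        # source-specific spurious features, since those are inconsistent
        # with the target domain's clusters.
        return loss_src + loss_tgt

The point of the sketch is only that the target domain's intrinsic structure enters the objective as a first-class supervision signal rather than as auxiliary pseudo-labels derived from the source classifier.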


Saved in:
Bibliographic Details
Main Authors: YUE, Zhongqi, SUN, Qianru, ZHANG, Hanwang
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Databases and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/8474
https://ink.library.smu.edu.sg/context/sis_research/article/9477/viewcontent/Make_the_U_in_UDA_Matter__Invariant_Consistency_Learning_for_Unsupervised_Domain_Adaptation__1_.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/