Semi-supervised domain generalization with stochastic StyleMatch

Ideally, visual learning algorithms should be generalizable, for dealing with any unseen domain shift when deployed in a new target environment, and data-efficient, for reducing development costs by using as few labels as possible. To this end, we study semi-supervised domain generalization (SSDG), which aims to learn a domain-generalizable model using multi-source, partially-labeled training data. We design two benchmarks that cover state-of-the-art methods developed in two related fields, i.e., domain generalization (DG) and semi-supervised learning (SSL). We find that the DG methods, which by design are unable to handle unlabeled data, perform poorly with limited labels in SSDG; the SSL methods, especially FixMatch, obtain much better results but still fall far short of the vanilla baseline trained with full labels. We propose StyleMatch, a simple approach that extends FixMatch with two new ingredients tailored for SSDG: (1) stochastic modeling for reducing overfitting when labels are scarce, and (2) multi-view consistency learning for enhancing domain generalization. Despite its concise design, StyleMatch achieves significant improvements in SSDG. We hope our approach and the comprehensive benchmarks can pave the way for future research on generalizable and data-efficient learning systems. The source code is released at https://github.com/KaiyangZhou/ssdg-benchmark.

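The description above gives the recipe only at a high level: FixMatch-style pseudo-label consistency, a stochastic classifier to curb overfitting on the few available labels, and consistency across an additional style-augmented view. Below is a minimal, illustrative PyTorch sketch of that recipe under stated assumptions; the names (StochasticClassifier, style_augment, ssdg_step) and the channel-statistics jitter standing in for real style transfer are hypothetical placeholders, not the authors' implementation, which is released at https://github.com/KaiyangZhou/ssdg-benchmark.

```python
# Hedged sketch of a FixMatch-style update with two StyleMatch-flavoured additions:
# (1) a classifier whose weights are sampled from a learned Gaussian ("stochastic modeling"),
# (2) a second strongly perturbed view produced by a style augmentation, so the pseudo-label
#     from the weak view supervises two views ("multi-view consistency").
# All module/function names here are illustrative placeholders, not the authors' API.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticClassifier(nn.Module):
    """Linear classifier with Gaussian weights; a fresh weight sample is drawn per forward pass."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((num_classes, feat_dim), -4.0))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        if self.training:
            eps = torch.randn_like(self.mu)
            weight = self.mu + eps * self.log_sigma.exp()  # reparameterization trick
        else:
            weight = self.mu  # use the mean weights at test time
        return F.linear(feats, weight)


def style_augment(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for a style perturbation; here we merely jitter per-image channel statistics
    to keep the sketch self-contained."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True) + 1e-6
    new_std = std * (1.0 + 0.1 * torch.randn_like(std))
    new_mean = mean * (1.0 + 0.1 * torch.randn_like(mean))
    return (x - mean) / std * new_std + new_mean


def ssdg_step(backbone, classifier, x_lab, y_lab, x_weak, x_strong, tau=0.95):
    """One training step: supervised CE on labels plus pseudo-label consistency on two unlabeled views."""
    # Supervised loss on the small labeled set.
    loss_sup = F.cross_entropy(classifier(backbone(x_lab)), y_lab)

    # Pseudo-labels from the weakly augmented view (no gradients through them).
    with torch.no_grad():
        probs = F.softmax(classifier(backbone(x_weak)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()

    # Consistency: the same pseudo-label supervises the strong view and the style-augmented view.
    logits_strong = classifier(backbone(x_strong))
    logits_style = classifier(backbone(style_augment(x_weak)))
    loss_u = (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()
    loss_u += (F.cross_entropy(logits_style, pseudo, reduction="none") * mask).mean()

    return loss_sup + loss_u


if __name__ == "__main__":
    # Tiny smoke test with random tensors standing in for images from multiple source domains.
    backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())
    classifier = StochasticClassifier(feat_dim=8, num_classes=7)
    x_lab, y_lab = torch.randn(4, 3, 32, 32), torch.randint(0, 7, (4,))
    x_weak, x_strong = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
    loss = ssdg_step(backbone, classifier, x_lab, y_lab, x_weak, x_strong)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

In this sketch the classifier samples its weights from a learned Gaussian during training and falls back to the mean weights at evaluation, while the confidence threshold tau decides which unlabeled examples contribute to the two consistency terms.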

Bibliographic Details
Main Authors: Zhou, Kaiyang, Loy, Chen Change, Liu, Ziwei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Semi-Supervised Domain Generalization; Image Recognition
Online Access:https://hdl.handle.net/10356/170127
Institution: Nanyang Technological University
id sg-ntu-dr.10356-170127
record_format dspace
spelling sg-ntu-dr.10356-170127 2023-08-29T02:29:33Z
author2 School of Computer Science and Engineering; S-Lab for Advanced Intelligence
type Journal Article
citation Zhou, K., Loy, C. C. & Liu, Z. (2023). Semi-supervised domain generalization with stochastic StyleMatch. International Journal of Computer Vision, 131(9), 2377-2387. https://dx.doi.org/10.1007/s11263-023-01821-x
journal International Journal of Computer Vision
volume 131
issue 9
pages 2377-2387
issn 0920-5691
doi 10.1007/s11263-023-01821-x
scopus 2-s2.0-85160959010
handle https://hdl.handle.net/10356/170127
language en
grants MOE-T2EP20221-0012; NTU NAP; IAF-ICP
funding Ministry of Education (MOE); Nanyang Technological University. This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
rights © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Semi-Supervised Domain Generalization
Image Recognition
publishDate 2023
url https://hdl.handle.net/10356/170127