Neuron coverage-guided domain generalization

This paper focuses on the domain generalization task where domain knowledge is unavailable and, even worse, only samples from a single domain can be utilized during training. Our motivation originates from recent progress in deep neural network (DNN) testing, which has shown that maximizing the neuron coverage of a DNN can help to explore possible defects of the DNN (i.e., misclassification). More specifically, by treating the DNN as a program and each neuron as a functional point of the code, during network training we aim to improve the generalization capability by maximizing the neuron coverage of the DNN with a gradient similarity regularization between the original and augmented samples. As such, the decision behavior of the DNN is optimized, avoiding the arbitrary neurons that are deleterious for unseen samples and leading to a trained DNN that generalizes better to out-of-distribution samples. Extensive studies on various domain generalization tasks under both single- and multiple-domain settings demonstrate the effectiveness of our proposed approach compared with state-of-the-art baseline methods. We also analyze our method by conducting visualization based on network dissection; the results provide further evidence of the rationality and effectiveness of our approach.

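The abstract describes the method only at a high level. The PyTorch sketch below illustrates the two ingredients it names — a neuron coverage measure and a gradient similarity regularizer between original and augmented samples — and is not the authors' implementation: the min-max scaling, the coverage threshold, the choice to differentiate with respect to all trainable parameters, and the cosine form of the penalty are all assumptions made here for concreteness.

```python
import torch
import torch.nn.functional as F

def neuron_coverage(activations: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Fraction of neurons whose activation exceeds the threshold for at
    least one input in the batch -- a common coverage criterion in the
    DNN-testing literature; the per-neuron min-max scaling and the 0.5
    threshold are assumptions."""
    a = activations.flatten(1)                       # (batch, num_neurons)
    lo, hi = a.min(dim=0).values, a.max(dim=0).values
    a = (a - lo) / (hi - lo + 1e-8)                  # scale each neuron to [0, 1]
    covered = (a > threshold).any(dim=0)             # neuron fired on any input
    return covered.float().mean()

def gradient_similarity_penalty(model, loss_fn, x, x_aug, y):
    """Penalize misalignment between the loss gradients computed on the
    original and augmented views; 1 - cosine similarity is one plausible
    form of such a regularizer (assumed here)."""
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True so the penalty itself can be backpropagated through
    g = torch.autograd.grad(loss_fn(model(x), y), params, create_graph=True)
    g_aug = torch.autograd.grad(loss_fn(model(x_aug), y), params, create_graph=True)
    g = torch.cat([t.flatten() for t in g])
    g_aug = torch.cat([t.flatten() for t in g_aug])
    return 1.0 - F.cosine_similarity(g, g_aug, dim=0)
```

A training loop in this spirit would combine the task loss with a term rewarding higher coverage (e.g., subtracting `neuron_coverage` of selected layers' activations) and a weighted `gradient_similarity_penalty`; the layer selection and loss weighting are likewise assumptions.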
Bibliographic Details
Main Authors: Tian, Chris Xing, Li, Haoliang, Xie, Xiaofei, Liu, Yang, Wang, Shiqi
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 (ISSN 0162-8828)
DOI: 10.1109/TPAMI.2022.3157441
Subjects: Engineering::Computer science and engineering; Neuron Coverage; Gradient Similarity
Online Access:https://hdl.handle.net/10356/162633
Institution: Nanyang Technological University