An empirical study on robustness of DNNs with out-of-distribution awareness

State-of-the-art deep neural networks (DNNs) achieve impressive performance on inputs that resemble their training data, but they fail to make reasonable decisions on inputs that differ substantially from the training data, i.e., out-of-distribution (OOD) examples. Although many techniques for detecting OOD examples have been proposed in recent years, there is still no systematic study of the effectiveness and robustness of these techniques, or of the performance of OOD-aware DNN models. In this paper, we conduct a comprehensive study of current OOD detection techniques and investigate how OOD-unaware and OOD-aware DNNs differ in model performance, robustness, and uncertainty. We first compare the effectiveness of existing detection techniques and identify the best one. We then perform evasion attacks to evaluate the robustness of each technique. Furthermore, we compare accuracy and robustness between OOD-unaware and OOD-aware DNNs. Finally, we study the uncertainty of the different models on various kinds of data. The empirical results show that OOD-aware detection modules perform better and are more robust against random noise and evasion attacks. OOD-awareness seldom degrades the accuracy of DNN models on the training/test datasets; on the contrary, it makes the models more robust against adversarial attacks and noisy inputs. Our study calls for attention to the development of OOD-aware DNN models, and for taking the data distribution into account whenever robust and reliable DNN models are desired.
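
The paper benchmarks existing OOD detection techniques; as a point of reference, the sketch below shows the maximum-softmax-probability (MSP) baseline of Hendrycks and Gimpel, one of the standard detectors such studies compare against. This is a minimal illustration, not the authors' artifact; the model, inputs, and threshold are assumptions.

```python
# Minimal MSP baseline sketch (illustrative; not the paper's code).
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Max softmax probability per input; low scores suggest OOD."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                # (batch, num_classes)
        probs = F.softmax(logits, dim=1)
    return probs.max(dim=1).values       # (batch,)

def is_ood(model, x, threshold=0.5):
    """Flag inputs whose confidence falls below a validation-tuned threshold."""
    return msp_score(model, x) < threshold
```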

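The study also evaluates detectors under evasion attacks. As a hedged sketch of what such an attack can look like, the FGSM-style step below perturbs an OOD input so as to raise the model's confidence and push it past a confidence-based detector's threshold; the epsilon value is an illustrative assumption, and the paper's actual attack setup may differ.

```python
# FGSM-style evasion sketch against a confidence-based OOD detector
# (illustrative assumption; not the paper's attack implementation).
import torch
import torch.nn.functional as F

def evade_msp(model: torch.nn.Module, x_ood: torch.Tensor,
              epsilon: float = 0.01) -> torch.Tensor:
    """One gradient-ascent step that increases max softmax confidence."""
    x = x_ood.clone().requires_grad_(True)
    confidence = F.softmax(model(x), dim=1).max(dim=1).values.sum()
    confidence.backward()
    # Ascend the confidence gradient so the detection score rises above
    # the threshold and the OOD input slips past the detector.
    return (x + epsilon * x.grad.sign()).detach()
```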
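
Finally, the study compares model uncertainty across various kinds of data. A common proxy for such a comparison is the predictive entropy of the softmax output, sketched below under the same illustrative assumptions; an OOD-aware model is expected to show higher entropy on data far from the training distribution.

```python
# Predictive-entropy sketch for the uncertainty comparison (illustrative).
import torch
import torch.nn.functional as F

def predictive_entropy(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax distribution, per input."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    # Clamp avoids log(0) on saturated probabilities.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
```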

Bibliographic Details
Main Authors: ZHOU, Lingjun, YU, Bing, BEREND, David, XIE, Xiaofei, LI, Xiaohong, ZHAO, Jianjun, LIU, Xusheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2020
Subjects: OS and Networks; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/7095
DOI: 10.1109/APSEC51365.2020.00035
Institution: Singapore Management University
Collection: InK@SMU, Research Collection School of Computing and Information Systems