Towards characterizing adversarial defects of deep learning software from the lens of uncertainty

Over the past decade, deep learning (DL) has been successfully applied to many industrial domain-specific tasks. However, state-of-the-art DL software still suffers from quality issues, which raises great concern, especially in safety- and security-critical scenarios. Adversarial examples (AEs), inputs on which DL software makes incorrect decisions, represent a typical and important type of defect that urgently needs to be addressed. Such defects arise through either intentional attacks or physical-world noise perceived by input sensors, potentially hindering further industrial deployment. The intrinsic uncertainty of deep learning decisions can be a fundamental reason for this incorrect behavior. Although some testing, adversarial attack, and defense techniques have recently been proposed, a systematic study uncovering the relationship between AEs and DL uncertainty is still lacking. In this paper, we conduct a large-scale study towards bridging this gap. We first investigate the capability of multiple uncertainty metrics to differentiate benign examples (BEs) from AEs, which enables us to characterize the uncertainty patterns of input data. We then identify and categorize the uncertainty patterns of BEs and AEs, and find that while BEs and AEs generated by existing methods do follow common uncertainty patterns, some other uncertainty patterns are largely missed. Based on this, we propose an automated testing technique to generate multiple types of uncommon AEs and BEs that are largely missed by existing techniques. Our further evaluation reveals that the uncommon data generated by our method is hard to defend against with existing defense techniques, reducing the average defense success rate by 35%. Our results highlight the need to generate more diverse data for evaluating quality assurance solutions for DL software.
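As an illustration only (this sketch is not code from the paper): uncertainty metrics of the kind studied in this line of work are commonly computed from multiple stochastic forward passes, e.g., predictive entropy and variation ratio over Monte Carlo dropout samples. The function names and toy data below are assumptions made for demonstration.

```python
# Illustrative sketch (not from the paper): two common uncertainty metrics,
# predictive entropy and variation ratio, computed from Monte Carlo dropout
# samples. Metrics like these can be used to compare benign examples (BEs)
# against adversarial examples (AEs); all names here are hypothetical.
import numpy as np

def predictive_entropy(mc_probs: np.ndarray) -> float:
    """Entropy of the mean softmax over T stochastic forward passes.

    mc_probs: array of shape (T, num_classes), each row a softmax output
    obtained with dropout enabled at inference time.
    """
    mean_probs = mc_probs.mean(axis=0)
    return float(-np.sum(mean_probs * np.log(mean_probs + 1e-12)))

def variation_ratio(mc_probs: np.ndarray) -> float:
    """Fraction of passes that disagree with the modal predicted class."""
    preds = mc_probs.argmax(axis=1)
    _, counts = np.unique(preds, return_counts=True)
    return 1.0 - counts.max() / len(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for dropout samples: a confident (benign-like) input vs.
    # an uncertain (adversarial-like) input over 3 classes and T=20 passes.
    confident = rng.dirichlet([50, 1, 1], size=20)
    uncertain = rng.dirichlet([5, 4, 4], size=20)
    for name, probs in [("confident", confident), ("uncertain", uncertain)]:
        print(name, predictive_entropy(probs), variation_ratio(probs))
```

Running the script prints a clearly higher predictive entropy for the "uncertain" toy input than for the "confident" one, mirroring the intuition that adversarial inputs tend to sit in higher-uncertainty regions than benign ones.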

Bibliographic Details
Main Authors: ZHANG, Xiyue, XIE, Xiaofei, MA, Lei, DU, Xiaoning, HU, Qiang, LIU, Yang, ZHAO, Jianjun, SUN, Meng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Deep learning; uncertainty; adversarial attack; software testing; OS and Networks; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/7084
https://ink.library.smu.edu.sg/context/sis_research/article/8087/viewcontent/3377811.3380368.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-8087
record_format dspace
spelling sg-smu-ink.sis_research-8087 2022-04-07T07:40:47Z Towards characterizing adversarial defects of deep learning software from the lens of uncertainty ZHANG, Xiyue XIE, Xiaofei MA, Lei DU, Xiaoning HU, Qiang LIU, Yang ZHAO, Jianjun SUN, Meng Over the past decade, deep learning (DL) has been successfully applied to many industrial domain-specific tasks. However, state-of-the-art DL software still suffers from quality issues, which raises great concern, especially in safety- and security-critical scenarios. Adversarial examples (AEs), inputs on which DL software makes incorrect decisions, represent a typical and important type of defect that urgently needs to be addressed. Such defects arise through either intentional attacks or physical-world noise perceived by input sensors, potentially hindering further industrial deployment. The intrinsic uncertainty of deep learning decisions can be a fundamental reason for this incorrect behavior. Although some testing, adversarial attack, and defense techniques have recently been proposed, a systematic study uncovering the relationship between AEs and DL uncertainty is still lacking. In this paper, we conduct a large-scale study towards bridging this gap. We first investigate the capability of multiple uncertainty metrics to differentiate benign examples (BEs) from AEs, which enables us to characterize the uncertainty patterns of input data. We then identify and categorize the uncertainty patterns of BEs and AEs, and find that while BEs and AEs generated by existing methods do follow common uncertainty patterns, some other uncertainty patterns are largely missed. Based on this, we propose an automated testing technique to generate multiple types of uncommon AEs and BEs that are largely missed by existing techniques. Our further evaluation reveals that the uncommon data generated by our method is hard to defend against with existing defense techniques, reducing the average defense success rate by 35%. Our results highlight the need to generate more diverse data for evaluating quality assurance solutions for DL software. 2020-05-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7084 info:doi/10.1145/3377811.3380368 https://ink.library.smu.edu.sg/context/sis_research/article/8087/viewcontent/3377811.3380368.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Deep learning uncertainty adversarial attack software testing OS and Networks Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Deep learning
uncertainty
adversarial attack
software testing
OS and Networks
Software Engineering
spellingShingle Deep learning
uncertainty
adversarial attack
software testing
OS and Networks
Software Engineering
ZHANG, Xiyue
XIE, Xiaofei
MA, Lei
DU, Xiaoning
HU, Qiang
LIU, Yang
ZHAO, Jianjun
SUN, Meng
Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
description Over the past decade, deep learning (DL) has been successfully applied to many industrial domain-specific tasks. However, state-of-the-art DL software still suffers from quality issues, which raises great concern, especially in safety- and security-critical scenarios. Adversarial examples (AEs), inputs on which DL software makes incorrect decisions, represent a typical and important type of defect that urgently needs to be addressed. Such defects arise through either intentional attacks or physical-world noise perceived by input sensors, potentially hindering further industrial deployment. The intrinsic uncertainty of deep learning decisions can be a fundamental reason for this incorrect behavior. Although some testing, adversarial attack, and defense techniques have recently been proposed, a systematic study uncovering the relationship between AEs and DL uncertainty is still lacking. In this paper, we conduct a large-scale study towards bridging this gap. We first investigate the capability of multiple uncertainty metrics to differentiate benign examples (BEs) from AEs, which enables us to characterize the uncertainty patterns of input data. We then identify and categorize the uncertainty patterns of BEs and AEs, and find that while BEs and AEs generated by existing methods do follow common uncertainty patterns, some other uncertainty patterns are largely missed. Based on this, we propose an automated testing technique to generate multiple types of uncommon AEs and BEs that are largely missed by existing techniques. Our further evaluation reveals that the uncommon data generated by our method is hard to defend against with existing defense techniques, reducing the average defense success rate by 35%. Our results highlight the need to generate more diverse data for evaluating quality assurance solutions for DL software.
format text
author ZHANG, Xiyue
XIE, Xiaofei
MA, Lei
DU, Xiaoning
HU, Qiang
LIU, Yang
ZHAO, Jianjun
SUN, Meng
author_facet ZHANG, Xiyue
XIE, Xiaofei
MA, Lei
DU, Xiaoning
HU, Qiang
LIU, Yang
ZHAO, Jianjun
SUN, Meng
author_sort ZHANG, Xiyue
title Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
title_short Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
title_full Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
title_fullStr Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
title_full_unstemmed Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
title_sort towards characterizing adversarial defects of deep learning software from the lens of uncertainty
publisher Institutional Knowledge at Singapore Management University
publishDate 2020
url https://ink.library.smu.edu.sg/sis_research/7084
https://ink.library.smu.edu.sg/context/sis_research/article/8087/viewcontent/3377811.3380368.pdf
_version_ 1770576208964091904