Self-checking deep neural networks for anomalies and adversaries in deployment
Deep Neural Networks (DNNs) have been widely adopted, yet DNN models are surprisingly unreliable, which raises significant concerns about their use in critical domains. In this work, we propose that runtime DNN mistakes can be quickly detected and properly dealt with in deployment, especially in set...
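The abstract's claim that runtime DNN mistakes "can be quickly detected" can be illustrated with a minimal confidence check. The sketch below is a generic softmax-threshold filter, not the SelfChecker technique the paper itself proposes; the function name, threshold value, and example probabilities are all illustrative assumptions.

```python
import numpy as np

def flag_uncertain(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Mark predictions whose top softmax probability falls below
    `threshold` -- a crude stand-in for a runtime self-check.
    (Hypothetical helper; not the paper's actual method.)"""
    probs = np.asarray(probs, dtype=float)
    top = probs.max(axis=-1)          # highest class probability per sample
    return top < threshold            # True = route to a fallback handler

# Example: one confident prediction, one borderline prediction.
batch = np.array([
    [0.97, 0.02, 0.01],   # confident  -> not flagged
    [0.40, 0.35, 0.25],   # uncertain  -> flagged for human review / fallback
])
print(flag_uncertain(batch).tolist())  # → [False, True]
```

A deployed self-check would typically use richer internal-layer signals than the final softmax, but the control flow (score each prediction, flag, fall back) follows this shape.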
Main Authors: XIAO, Yan; BESCHASTNIKH, Ivan; LIN, Yun; HUNDAL, Rajdeep Singh; XIE, Xiaofei; ROSENBLUM, David S.; DONG, Jin Song
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7493 https://ink.library.smu.edu.sg/context/sis_research/article/8496/viewcontent/tdsc22_selfchecker.pdf
Institution: Singapore Management University
Similar Items
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)
- Robust data-driven adversarial false data injection attack detection method with deep Q-network in power systems
  by: Ran, Xiaohong, et al.
  Published: (2024)
- Towards characterizing adversarial defects of deep learning software from the lens of uncertainty
  by: ZHANG, Xiyue, et al.
  Published: (2020)
- Breaking neural reasoning architectures with metamorphic relation-based adversarial examples
  by: CHAN, Alvin, et al.
  Published: (2021)
- Adversarial attacks and mitigation for anomaly detectors of cyber-physical systems
  by: JIA, Yifan, et al.
  Published: (2021)