A learning and masking approach to secure learning
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are being applied to increasingly critical tasks like image recognition in...
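The abstract's notion of an adversarial example, a point perturbed so a classifier misjudges it, can be illustrated with a minimal sketch. This is not the paper's method; it is a generic fast-gradient-sign-style perturbation against a toy linear classifier, with the weights `w`, input `x`, and budget `eps` all chosen here purely for illustration.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Move x a distance eps (per coordinate) in the sign of the loss gradient."""
    return x + eps * np.sign(grad)

# Toy linear classifier: predict class 1 when w @ x > 0 (illustrative values).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input, score w @ x = 0.2 > 0

# For logistic loss with true label 1, the input gradient is proportional to -w.
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.3)

print(float(w @ x), float(w @ x_adv))  # small perturbation flips the score's sign
```

Even this tiny budget of 0.3 per coordinate is enough to push the toy example across the decision boundary, which is the vulnerability the abstract describes.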
Main Authors: NGUYEN, Linh; WANG, Sky; SINHA, Arunesh
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2018
Online Access: https://ink.library.smu.edu.sg/sis_research/4793
https://ink.library.smu.edu.sg/context/sis_research/article/5796/viewcontent/1709.04447.pdf
Institution: Singapore Management University
Similar Items
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  by: Liwei Song, et al.
  Published: (2020)
- Regret-based defense in adversarial reinforcement learning
  by: BELAIRE, Roman, et al.
  Published: (2024)
- Robust Learning and Prediction in Deep Learning
  by: ZHANG Jingfeng
  Published: (2021)
- Robust data-driven adversarial false data injection attack detection method with deep Q-network in power systems
  by: Ran, Xiaohong, et al.
  Published: (2024)
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)