Detecting adversarial samples for deep neural networks through mutation testing
Deep Neural Networks (DNNs) are adept at many tasks; the well-known task of image recognition uses a subset of DNNs called Convolutional Neural Networks (CNNs). However, they are vulnerable to adversarial attacks. Adversarial attacks are malicious modifications made to input sam...
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/138719