Investigating the causes of the vulnerability of CNNs to adversarial perturbations: learning objective, model components, and learned representations
This work focuses on understanding how adversarial perturbations can disrupt the behavior of Convolutional Neural Networks (CNNs). It is hypothesized that some components of a model may be more vulnerable than others, unlike other research that treats a model as vulnerable as a whole. Identifying model-s...
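For context, the sketch below shows one standard way an adversarial perturbation of the kind studied here can be crafted, using the Fast Gradient Sign Method (FGSM). The toy CNN, the epsilon value, and the random input are illustrative assumptions and are not taken from the thesis itself.

```python
# A minimal sketch (not from the thesis): crafting an adversarial perturbation
# with the Fast Gradient Sign Method (FGSM). The toy CNN, epsilon value, and
# random input below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN classifier standing in for the models studied in the thesis.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed so as to increase the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Example usage on a random image and an arbitrary true class.
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([0])
x_adv = fgsm_perturb(x, y)
```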
| Main Author: | Coppola, Davide |
| --- | --- |
| Other Authors: | Guan Cuntai |
| Format: | Thesis-Master by Research |
| Language: | English |
| Published: | Nanyang Technological University, 2023 |
| Online Access: | https://hdl.handle.net/10356/171336 |
| Institution: | Nanyang Technological University |
Similar Items
- Adversarial robustness of deep reinforcement learning, by: Qu, Xinghua. Published: (2022)
- Adversarial training using meta-learning for BERT, by: Low, Timothy Jing Haen. Published: (2022)
- Vision language representation learning, by: Yang, Xiaofeng. Published: (2023)
- Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks, by: Bai, Tao. Published: (2022)
- Adversarial attacks on RNN-based deep learning systems, by: Loi, Chii Lek. Published: (2020)