What it thinks is important is important: robustness transfers through input gradients
Adversarial perturbations are imperceptible changes to input pixels that can change the prediction of deep learning models. Learned weights of models robust to such perturbations have previously been found to be transferable across different tasks, but this applies only if the model architecture for the so...
Saved in:
Main Authors: Chan, Alvin; Tay, Yi; Ong, Yew-Soon
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/144389
Institution: Nanyang Technological University
Similar Items
- Strategic thinking skills and their economic importance
  by: CHOI, Syngjoo, et al.
  Published: (2023)
- Import multiplier in input-output analysis
  by: Bui, Trinh, et al.
  Published: (2016)
- ROBUST LEARNING AND PREDICTION IN DEEP LEARNING
  by: ZHANG JINGFENG
  Published: (2021)
- Generalizing transfer Bayesian optimization to source-target heterogeneity
  by: Min, Alan Tan Wei, et al.
  Published: (2022)
- Personality and Group Performance: The Importance of Personality Composition and Work Tasks
  by: KRAMER, Amit, et al.
  Published: (2014)