What it thinks is important is important : robustness transfers through input gradients
Adversarial perturbations are imperceptible changes to input pixels that can change the prediction of deep learning models. Learned weights of models robust to such perturbations have previously been found to be transferable across different tasks, but this applies only if the model architecture for the source and target tasks is the same. Input gradients characterize how small changes at each input pixel affect the model output. Using only natural images, we show here that training a student model's input gradients to match those of a robust teacher model can give the student robustness close to that of a strong baseline that is robustly trained from scratch. Through experiments on MNIST, CIFAR-10, CIFAR-100 and Tiny-ImageNet, we show that our proposed method, input gradient adversarial matching (IGAM), can transfer robustness across different tasks and even across different model architectures. This demonstrates that directly targeting the semantics of input gradients is a feasible way towards adversarial robustness.
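To make the abstract's core idea concrete, here is a minimal PyTorch sketch of training a student model so that its input gradients match those of a frozen, robust teacher on natural images. It uses a plain squared-error penalty between gradients; the paper's IGAM method matches gradients adversarially, so this is only an illustration of the general mechanism, and the model definitions, loss weight, and data here are hypothetical.

```python
# Minimal sketch: train a student so its input gradients match a frozen robust
# teacher's. Squared-error matching is used here for simplicity; IGAM itself
# matches gradients adversarially. All models/hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def input_gradient(model, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input pixels."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    return grad

def gradient_matching_step(student, teacher, optimizer, x, y, lambda_grad=1.0):
    """One training step on natural images: task loss + input-gradient matching."""
    optimizer.zero_grad()
    task_loss = F.cross_entropy(student(x), y)
    teacher_grad = input_gradient(teacher, x, y).detach()  # teacher stays frozen
    student_grad = input_gradient(student, x, y)           # differentiable w.r.t. student params
    match_loss = F.mse_loss(student_grad, teacher_grad)
    (task_loss + lambda_grad * match_loss).backward()
    optimizer.step()
    return task_loss.item(), match_loss.item()

if __name__ == "__main__":
    # Tiny stand-in models on random MNIST-shaped data, purely for illustration.
    student = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # pretend this one is robust
    for p in teacher.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
    x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
    print(gradient_matching_step(student, teacher, optimizer, x, y))
```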
Saved in:
Main Authors: | Chan, Alvin; Tay, Yi; Ong, Yew-Soon |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2020 |
Subjects: | Engineering; Robustness; Task Analysis |
Online Access: | https://hdl.handle.net/10356/144389 |
Institution: | Nanyang Technological University |
Record ID: | sg-ntu-dr.10356-144389 |
---|---|
Conference: | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Type: | Conference Paper |
Citation: | Chan, A., Tay, Y., & Ong, Y.-S. (2020). What it thinks is important is important : robustness transfers through input gradients. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/CVPR42600.2020.00041 |
DOI: | 10.1109/CVPR42600.2020.00041 |
Version: | Accepted version |
Date deposited: | 2020-11-03 |
Funding: | AI Singapore; National Research Foundation (NRF). This paper is supported in part by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-004), and the Data Science and Artificial Intelligence Research Center at Nanyang Technological University. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation, Singapore. |
Rights: | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR42600.2020.00041 |
File format: | application/pdf |
Content Provider: | NTU Library, Nanyang Technological University, Singapore |
Collection: | DR-NTU |