Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks
Deep neural networks (DNNs) are well known to be vulnerable to adversarial attacks, in which malicious, human-imperceptible perturbations are added to the input to fool the network into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equati...
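The central idea the abstract describes, routing features through ODE dynamics whose equilibrium is Lyapunov stable so that small input perturbations are damped rather than amplified, can be illustrated with a toy sketch. This is not the authors' actual model: the dynamics `dx/dt = -x + tanh(W x)`, the dimensions, and the spectral-norm stability condition below are illustrative assumptions only.

```python
import numpy as np

# Toy illustration (not the paper's architecture): an ODE "layer"
# dx/dt = -x + tanh(W x). If the spectral norm of W is below 1, the
# dynamics are contracting, so two trajectories started from a clean
# input and a perturbed input are pulled together over time.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / np.linalg.norm(W, 2)  # rescale so ||W||_2 = 0.9 < 1

def ode_layer(x0, steps=200, dt=0.05):
    """Integrate dx/dt = -x + tanh(W x) with forward Euler."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x))
    return x

x = rng.standard_normal(4)
delta = 0.1 * rng.standard_normal(4)  # adversarial-style perturbation

out_clean = ode_layer(x)
out_pert = ode_layer(x + delta)

# The stable dynamics shrink the output gap below the input gap.
print(np.linalg.norm(delta), np.linalg.norm(out_pert - out_clean))
```

Here each Euler step is a map with Lipschitz constant at most `(1 - dt) + dt * ||W||_2 < 1`, so the distance between the clean and perturbed trajectories decays geometrically, which is the intuition behind using Lyapunov-stable equilibria as an adversarial defense.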
Main Authors: Kang, Qiyu; Song, Yang; Ding, Qinxu; Tay, Wee Peng
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Online Access:
https://hdl.handle.net/10356/166692
https://nips.cc/Conferences/2021
https://proceedings.neurips.cc/
Institution: Nanyang Technological University
Similar Items
- Adversarial attacks and robustness for segment anything model
  by: Liu, Shifei
  Published: (2024)
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)
- Deep-attack over the deep reinforcement learning
  by: Li, Yang, et al.
  Published: (2022)
- Towards robust rain removal against adversarial attacks: a comprehensive benchmark analysis and beyond
  by: Yu, Yi, et al.
  Published: (2022)
- Adversarial attack defenses for neural networks
  by: Puah, Yi Hao
  Published: (2024)