Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks

Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
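As background for the stability notion the abstract appeals to (standard dynamical-systems material, not an excerpt from the paper itself): an equilibrium z* of a neural ODE dz/dt = f_theta(z) is asymptotically, and hence Lyapunov, stable when every eigenvalue of the Jacobian of f_theta at z* has negative real part, so a feature trajectory started from a slightly perturbed input is pulled back to the same point:

\[
\frac{\mathrm{d}z(t)}{\mathrm{d}t} = f_\theta(z(t)), \qquad f_\theta(z^*) = 0,
\]
\[
\operatorname{Re}\,\lambda_i\!\left(\left.\frac{\partial f_\theta}{\partial z}\right|_{z=z^*}\right) < 0 \ \text{ for all } i
\quad\Longrightarrow\quad
z(t) \to z^* \ \text{ whenever } \|z(0)-z^*\| \text{ is sufficiently small.}
\]

This is the mechanism behind the abstract's claim that the ODE solution for a slightly perturbed input converges to the same solution as for the unperturbed input.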

Saved in:
Bibliographic Details
Main Authors: Kang, Qiyu; Song, Yang; Ding, Qinxu; Tay, Wee Peng
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Deep Neural Networks; Adversarial Attacks
Online Access: https://hdl.handle.net/10356/166692
https://nips.cc/Conferences/2021
https://proceedings.neurips.cc/
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-166692
record_format dspace
spelling sg-ntu-dr.10356-166692 2023-05-12T15:39:53Z
title: Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks
authors: Kang, Qiyu; Song, Yang; Ding, Qinxu; Tay, Wee Peng
school: School of Electrical and Electronic Engineering
conference: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
research centre: Continental-NTU Corporate Lab
subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Deep Neural Networks; Adversarial Attacks
abstract: Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
funding agency: Agency for Science, Technology and Research (A*STAR)
version: Published version
funding note: This research is supported in part by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund – Pre-Positioning (IAF-PP) (Grant No. A19D6a0053) and the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
dates: 2023-05-09T01:45:59Z (accessioned); 2023-05-09T01:45:59Z (available); 2021 (issued)
type: Conference Paper
citation: Kang, Q., Song, Y., Ding, Q. & Tay, W. P. (2021). Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks. 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 1-13.
online access: https://hdl.handle.net/10356/166692; https://nips.cc/Conferences/2021; https://proceedings.neurips.cc/
pages: 1-13
language: en
grant number: A19D6a0053
rights: © 2021 The Author(s). All rights reserved. This paper was published in Proceedings of 35th Conference on Neural Information Processing Systems (NeurIPS 2021) and is made available with permission of The Author(s).
file format: application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Deep Neural Networks
Adversarial Attacks
description Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
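The regularization idea sketched in the description above can be illustrated with a short, hypothetical PyTorch snippet. This is a minimal sketch written for this record, not the authors' released SODEF implementation: the names ODEFeatureBlock and stability_regularizer, the fixed-step Euler integration, and the particular penalty terms and weights are all assumptions made here. It only shows the general idea of (i) pushing extracted features towards equilibria of the learned dynamics and (ii) penalizing Jacobian eigenvalues with positive real part, which would make those equilibria unstable.

import torch
import torch.nn as nn


class ODEFeatureBlock(nn.Module):
    """Hypothetical feature block: integrates dz/dt = f(z) with fixed forward-Euler steps."""

    def __init__(self, dim, steps=10, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.steps = steps
        self.dt = dt

    def forward(self, z):
        for _ in range(self.steps):
            z = z + self.dt * self.f(z)  # one Euler step of the ODE
        return z


def stability_regularizer(block, z, eq_weight=1.0, eig_weight=1.0):
    """Illustrative penalty: features should sit near equilibria (f(z) close to 0) whose
    Jacobian eigenvalues have negative real parts (asymptotic, hence Lyapunov, stability)."""
    # (i) distance of the dynamics from zero at the extracted feature points
    eq_loss = block.f(z).pow(2).sum(dim=1).mean()

    # (ii) penalize eigenvalues of the Jacobian of f with positive real part
    jac = torch.stack([
        torch.autograd.functional.jacobian(block.f, zi, create_graph=True)
        for zi in z
    ])                                    # shape: (batch, dim, dim)
    eig_real = torch.linalg.eigvals(jac).real
    eig_loss = torch.relu(eig_real).pow(2).mean()

    return eq_weight * eq_loss + eig_weight * eig_loss


# Hypothetical usage: add the penalty to the usual task loss.
# block = ODEFeatureBlock(dim=64)
# loss = task_loss + stability_regularizer(block, features)

In this reading, the penalty plays the role the abstract assigns to the proposed regularizers: keeping the extracted feature points within a neighborhood of Lyapunov-stable equilibrium points of the ODE, so that small input perturbations leave the final classification unchanged.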
author2 School of Electrical and Electronic Engineering
format Conference or Workshop Item
author Kang, Qiyu
Song, Yang
Ding, Qinxu
Tay, Wee Peng
title Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks
publishDate 2023
url https://hdl.handle.net/10356/166692
https://nips.cc/Conferences/2021
https://proceedings.neurips.cc/