Non-vacuous generalization bounds for adversarial risk in stochastic neural networks

Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To address this limitation, we propose novel generalization bounds based on the PAC-Bayesian and randomized smoothing frameworks, providing certificates that predict the model’s performance and robustness on unseen test samples based solely on the training data. We present an effective procedure to train and compute the first non-vacuous generalization bounds for neural networks in adversarial settings. Experimental results on the widely recognized MNIST and CIFAR-10 datasets demonstrate the efficacy of our approach, yielding the first robust risk certificates for stochastic convolutional neural networks under the $L_2$ threat model. Our method offers valuable tools for evaluating model susceptibility to real-world adversarial risks.

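The abstract combines two standard ingredients: a randomized-smoothing certificate under the $L_2$ threat model and a PAC-Bayesian generalization bound. The sketch below is an illustration of those ingredients, not the authors' procedure or code: it computes the classic Cohen-et-al.-style certified $L_2$ radius from a lower confidence bound on the smoothed classifier's top-class probability, and inverts the standard PAC-Bayes-kl inequality to turn an empirical robust error into an upper bound on the true robust risk. All function names, the confidence level, and the numeric inputs are hypothetical.

```python
# Illustrative sketch (assumed, not the paper's implementation) of two ingredients
# named in the abstract: a randomized-smoothing L2 certificate and a PAC-Bayes-kl
# generalization bound. Numeric values below are hypothetical.

import numpy as np
from scipy.stats import norm


def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified L2 radius sigma * Phi^{-1}(p_a) for a Gaussian-smoothed classifier,
    where p_a is a lower confidence bound on the top-class probability and the
    runner-up probability is bounded by 1 - p_a."""
    if p_a <= 0.5:
        return 0.0  # no certificate in this case
    return sigma * norm.ppf(p_a)


def kl_bernoulli(q: float, p: float) -> float:
    """KL divergence between Bernoulli(q) and Bernoulli(p), with clamping."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))


def pac_bayes_kl_bound(emp_risk: float, kl_posterior_prior: float,
                       n: int, delta: float = 0.05) -> float:
    """Invert the PAC-Bayes-kl inequality
        kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n
    by binary search to obtain an upper bound on the true (robust) risk."""
    rhs = (kl_posterior_prior + np.log(2 * np.sqrt(n) / delta)) / n
    lo, hi = emp_risk, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return hi


if __name__ == "__main__":
    # Hypothetical inputs: smoothing noise sigma, estimated top-class probability,
    # empirical robust error on n training points, and KL between the stochastic
    # network's posterior and prior over weights.
    print(certified_l2_radius(p_a=0.9, sigma=0.5))   # roughly 0.64
    print(pac_bayes_kl_bound(emp_risk=0.15,
                             kl_posterior_prior=5000.0, n=60000))
```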

Saved in:
Bibliographic Details
Main Authors: MUSTAFA, Waleed, LIZNERSKI, Philipp, LEDENT, Antoine, WAGNER, Dennis, WANG, Puyu, KLOFT, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9306
https://ink.library.smu.edu.sg/context/sis_research/article/10306/viewcontent/mustafa24a.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10306
record_format dspace
spelling sg-smu-ink.sis_research-10306 2024-09-21T15:29:22Z Non-vacuous generalization bounds for adversarial risk in stochastic neural networks MUSTAFA, Waleed LIZNERSKI, Philipp LEDENT, Antoine WAGNER, Dennis WANG, Puyu KLOFT, Marius Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To address this limitation, we propose novel generalization bounds based on the PAC-Bayesian and randomized smoothing frameworks, providing certificates that predict the model’s performance and robustness on unseen test samples based solely on the training data. We present an effective procedure to train and compute the first non-vacuous generalization bounds for neural networks in adversarial settings. Experimental results on the widely recognized MNIST and CIFAR-10 datasets demonstrate the efficacy of our approach, yielding the first robust risk certificates for stochastic convolutional neural networks under the $L_2$ threat model. Our method offers valuable tools for evaluating model susceptibility to real-world adversarial risks. 2024-05-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9306 info:doi/https://proceedings.mlr.press/v238/mustafa24a.html https://ink.library.smu.edu.sg/context/sis_research/article/10306/viewcontent/mustafa24a.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Bayesian Generalisation Generalization bound Machine learning models Neural-networks Performance Safety critical applications Stochastic neural network Test samples Training data Databases and Information Systems Data Storage Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Bayesian
Generalisation
Generalization bound
Machine learning models
Neural-networks
Performance
Safety critical applications
Stochastic neural network
Test samples
Training data
Databases and Information Systems
Data Storage Systems
spellingShingle Bayesian
Generalisation
Generalization bound
Machine learning models
Neural-networks
Performance
Safety critical applications
Stochastic neural network
Test samples
Training data
Databases and Information Systems
Data Storage Systems
MUSTAFA, Waleed
LIZNERSKI, Philipp
LEDENT, Antoine
WAGNER, Dennis
WANG, Puyu
KLOFT, Marius
Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
description Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To address this limitation, we propose novel generalization bounds based on the PAC-Bayesian and randomized smoothing frameworks, providing certificates that predict the model’s performance and robustness on unseen test samples based solely on the training data. We present an effective procedure to train and compute the first non-vacuous generalization bounds for neural networks in adversarial settings. Experimental results on the widely recognized MNIST and CIFAR-10 datasets demonstrate the efficacy of our approach, yielding the first robust risk certificates for stochastic convolutional neural networks under the $L_2$ threat model. Our method offers valuable tools for evaluating model susceptibility to real-world adversarial risks.
format text
author MUSTAFA, Waleed
LIZNERSKI, Philipp
LEDENT, Antoine
WAGNER, Dennis
WANG, Puyu
KLOFT, Marius
author_facet MUSTAFA, Waleed
LIZNERSKI, Philipp
LEDENT, Antoine
WAGNER, Dennis
WANG, Puyu
KLOFT, Marius
author_sort MUSTAFA, Waleed
title Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
title_short Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
title_full Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
title_fullStr Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
title_full_unstemmed Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
title_sort non-vacuous generalization bounds for adversarial risk in stochastic neural networks
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9306
https://ink.library.smu.edu.sg/context/sis_research/article/10306/viewcontent/mustafa24a.pdf
_version_ 1814047876135780352