Non-vacuous generalization bounds for adversarial risk in stochastic neural networks
Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To addre...
Saved in:
Main Authors: MUSTAFA, Waleed; LIZNERSKI, Philipp; LEDENT, Antoine; WAGNER, Dennis; WANG, Puyu; KLOFT, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9306
https://ink.library.smu.edu.sg/context/sis_research/article/10306/viewcontent/mustafa24a.pdf
Institution: Singapore Management University
Similar Items
- Norm-based generalisation bounds for deep multi-class convolutional neural networks
  by: LEDENT, Antoine, et al.
  Published: (2021)
- Fine-grained analysis of structured output prediction
  by: MUSTAFA, Waleed, et al.
  Published: (2021)
- Innovations in Bayesian networks : theory and applications
  Published: (2017)
- Deep neural network-based bandwidth enhancement of photoacoustic data
  by: Gutta, Sreedevi, et al.
  Published: (2017)
- Stealthy and robust backdoor attack on deep neural networks based on data augmentation
  by: Xu, Chaohui, et al.
  Published: (2024)