Non-vacuous generalization bounds for adversarial risk in stochastic neural networks

Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To addre...

Description

Bibliographic Details
Main Authors: Mustafa, Waleed; Liznerski, Philipp; Ledent, Antoine; Wagner, Dennis; Wang, Puyu; Kloft, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9306
https://ink.library.smu.edu.sg/context/sis_research/article/10306/viewcontent/mustafa24a.pdf