Certified robust accuracy of neural networks are bounded due to Bayes errors
Adversarial examples pose a security threat to many critical systems built on neural networks. While certified training improves robustness, it also decreases accuracy noticeably. Despite various proposals for addressing this issue, the significant accuracy drop remains. More importantly, it is not clear whether there is a certain fundamental limit on achieving robustness whilst maintaining accuracy. In this work, we offer a novel perspective based on Bayes errors. By adopting Bayes error to robustness analysis, we investigate the limit of certified robust accuracy, taking into account data distribution uncertainties. We first show that the accuracy inevitably decreases in the pursuit of robustness due to changed Bayes error in the altered data distribution. Subsequently, we establish an upper bound for certified robust accuracy, considering the distribution of individual classes and their boundaries. Our theoretical results are empirically evaluated on real-world datasets and are shown to be consistent with the limited success of existing certified training results, e.g., for CIFAR10, our analysis results in an upper bound (of certified robust accuracy) of 67.49%, meanwhile existing approaches are only able to increase it from 53.89% in 2017 to 62.84% in 2023.
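The Bayes error the abstract refers to is the irreducible error of any classifier on a given data distribution: the integral of the smaller of the class-conditional densities (weighted by priors) over the input space. A minimal numerical sketch for two overlapping one-dimensional Gaussians is shown below; the two-Gaussian setup and the helper names are illustrative assumptions, not taken from the paper itself.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_error(mu0, mu1, sigma=1.0, prior0=0.5, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint-rule integral of min(pi0*p0(x), pi1*p1(x)):
    the error that even the optimal (Bayes) classifier cannot avoid."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += min(prior0 * gaussian_pdf(x, mu0, sigma),
                     (1 - prior0) * gaussian_pdf(x, mu1, sigma)) * dx
    return total

# Equal-prior unit-variance classes centred at -1 and +1: the optimal
# classifier thresholds at 0, and its error is Phi(-1), roughly 0.159.
err = bayes_error(-1.0, 1.0)
closed_form = 0.5 * (1 + math.erf(-1 / math.sqrt(2)))  # Phi(-1)
print(err, closed_form)
```

Requiring robustness within a radius effectively forces the classifier to be constant on neighbourhoods, which (as the paper argues) acts like altering the data distribution and can only raise this irreducible error, bounding certified robust accuracy from above.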
Main Authors: | ZHANG, Ruihan; SUN, Jun |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Subjects: | Graphics and Human Computer Interfaces; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/sis_research/9178 https://ink.library.smu.edu.sg/context/sis_research/article/10183/viewcontent/CERTIFIED_ROBUST.pdf |
DOI: | 10.1007/978-3-031-65630-9_18 |
---|---|
License: | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Published online: | 2024-07-01 |
Collection: | Research Collection School Of Computing and Information Systems |