Robust learning with probabilistic relaxation using hypothesis-test-based sampling

In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues, which can be fatal in safety-critical applications, have drawn increasing concern. Adversarial training can mitigate the issue ...


Bibliographic Details
Main Author: WANG, Zilin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/etd_coll/668
https://ink.library.smu.edu.sg/context/etd_coll/article/1666/viewcontent/GPIS_AY2022_MbR_Wang_Zilin.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.etd_coll-1666
record_format dspace
spelling sg-smu-ink.etd_coll-16662025-02-13T05:48:13Z Robust learning with probabilistic relaxation using hypothesis-test-based sampling WANG, Zilin In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues, which can be fatal in safety-critical applications, have drawn increasing concern. Adversarial training can mitigate the issue by minimizing the loss under worst-case perturbations of the data. It is effective in improving the robustness of the model, but it is often too conservative, and the model's clean performance can be unsatisfying. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, but in most existing work the robustness of the model is not provable. This thesis proposes a novel approach to robust learning that samples perturbations based on hypothesis testing. The approach guides training to improve robustness in a highly efficient probabilistic robustness setting, and it enforces robustness that can be provably certified. We evaluate our new framework by generating adversarial samples from several popular datasets and comparing its performance with other state-of-the-art works. Our approach performs comparably to the state of the art on simple classification tasks and outperforms it on more difficult tasks. 2024-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/etd_coll/668 https://ink.library.smu.edu.sg/context/etd_coll/article/1666/viewcontent/GPIS_AY2022_MbR_Wang_Zilin.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Dissertations and Theses Collection (Open Access) eng Institutional Knowledge at Singapore Management University AI Security AI Robustness Deep Learning Artificial Intelligence and Robotics
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic AI Security
AI Robustness
Deep Learning
Artificial Intelligence and Robotics
spellingShingle AI Security
AI Robustness
Deep Learning
Artificial Intelligence and Robotics
WANG, Zilin
Robust learning with probabilistic relaxation using hypothesis-test-based sampling
description In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues, which can be fatal in safety-critical applications, have drawn increasing concern. Adversarial training can mitigate the issue by minimizing the loss under worst-case perturbations of the data. It is effective in improving the robustness of the model, but it is often too conservative, and the model's clean performance can be unsatisfying. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, but in most existing work the robustness of the model is not provable. This thesis proposes a novel approach to robust learning that samples perturbations based on hypothesis testing. The approach guides training to improve robustness in a highly efficient probabilistic robustness setting, and it enforces robustness that can be provably certified. We evaluate our new framework by generating adversarial samples from several popular datasets and comparing its performance with other state-of-the-art works. Our approach performs comparably to the state of the art on simple classification tasks and outperforms it on more difficult tasks.
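To make the general idea behind the abstract concrete: one common way to certify probabilistic robustness via hypothesis testing is to sample random perturbations of an input, count misclassifications, and apply an exact binomial test (a Clopper-Pearson upper confidence bound) to decide whether the true violation probability is below a tolerance rho. The sketch below illustrates that generic statistical technique only; it is not the thesis's actual algorithm, and all function names, parameters, and the simple list-based model interface are hypothetical.

```python
import math
import random

def violation_upper_bound(k, n, alpha=0.05):
    """One-sided (1 - alpha) Clopper-Pearson upper bound on a binomial
    proportion: the smallest p with P[Binomial(n, p) <= k] <= alpha,
    found by bisection on the exact binomial CDF."""
    def cdf(p):
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):  # 60 halvings give ~2^-60 precision
        mid = (lo + hi) / 2
        if cdf(mid) > alpha:
            lo = mid  # p too small: the tail is still too heavy
        else:
            hi = mid
    return hi

def certify_probabilistic_robustness(model, x, y, eps, rho,
                                     n=1000, alpha=0.05):
    """Sample n uniform perturbations in the L-infinity ball of radius
    eps around x, count label violations, and certify (at confidence
    1 - alpha) that the violation probability is at most rho."""
    k = 0
    for _ in range(n):
        x_pert = [xi + random.uniform(-eps, eps) for xi in x]
        if model(x_pert) != y:
            k += 1
    return violation_upper_bound(k, n, alpha) <= rho
```

With zero observed violations out of 1000 samples, the bound evaluates to roughly 1 - 0.05^(1/1000), about 0.003, so the input can be certified for any tolerance rho above that; the bound grows as more violations are observed, which is the hypothesis-test trade-off between sample budget and certifiable tolerance.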
format text
author WANG, Zilin
author_facet WANG, Zilin
author_sort WANG, Zilin
title Robust learning with probabilistic relaxation using hypothesis-test-based sampling
title_short Robust learning with probabilistic relaxation using hypothesis-test-based sampling
title_full Robust learning with probabilistic relaxation using hypothesis-test-based sampling
title_fullStr Robust learning with probabilistic relaxation using hypothesis-test-based sampling
title_full_unstemmed Robust learning with probabilistic relaxation using hypothesis-test-based sampling
title_sort robust learning with probabilistic relaxation using hypothesis-test-based sampling
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/etd_coll/668
https://ink.library.smu.edu.sg/context/etd_coll/article/1666/viewcontent/GPIS_AY2022_MbR_Wang_Zilin.pdf
_version_ 1827070760457338880