Robust learning with probabilistic relaxation using hypothesis-test-based sampling

Bibliographic Details
Main Author: WANG, Zilin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/etd_coll/668
https://ink.library.smu.edu.sg/context/etd_coll/article/1666/viewcontent/GPIS_AY2022_MbR_Wang_Zilin.pdf
Institution: Singapore Management University
Language: English
Description
Summary: In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues, which can be fatal in safety-critical applications, have drawn great concern. Adversarial training can mitigate the issue by minimizing the loss under worst-case perturbations of the data. It is effective in improving the robustness of the model, but it is often too conservative, and the model's clean (non-adversarial) performance can be unsatisfactory. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, but in most current work the robustness of the model is not provable. This thesis proposes a novel approach to robust learning that samples based on hypothesis testing. The approach guides training to improve robustness in a highly efficient probabilistic robustness setting, and it also enforces provable certification of that robustness. We evaluate the new framework by generating adversarial samples from several popular datasets and comparing the performance with other state-of-the-art works. Our approach achieves performance close to the state of the art on simple classification tasks and better performance on more difficult tasks.
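
As a rough illustration of how hypothesis-test-based sampling could be used to check probabilistic robustness of this kind, the Python sketch below samples random perturbations around an input and applies a Clopper-Pearson upper confidence bound to the observed violation rate. The l_inf perturbation model, the names model, epsilon, rho, and alpha, and the choice of a binomial-type bound are illustrative assumptions and are not taken from the thesis itself.

import numpy as np
from scipy.stats import beta


def is_probably_robust(model, x, label, epsilon, rho=0.05, alpha=0.01,
                       n_samples=1000, seed=None):
    """Decide, with confidence 1 - alpha, whether the probability of
    misclassification under random l_inf perturbations of radius epsilon
    is at most rho (a probabilistic relaxation of worst-case robustness).
    All thresholds here are assumed hyperparameters, not values from the thesis."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_samples):
        # Sample one perturbation uniformly from the l_inf ball of radius epsilon.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if np.argmax(model(x + delta)) != label:
            failures += 1
    # One-sided Clopper-Pearson upper bound on the true violation probability.
    if failures == n_samples:
        upper = 1.0
    else:
        upper = beta.ppf(1.0 - alpha, failures + 1, n_samples - failures)
    return upper <= rho


# Toy usage with a placeholder linear scorer standing in for a trained network.
if __name__ == "__main__":
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    model = lambda z: W @ z
    x, label = np.array([2.0, -2.0]), 0
    print(is_probably_robust(model, x, label, epsilon=0.1))

In a training loop, inputs that fail such a test could, for example, contribute their sampled violating perturbations to the loss; this is one plausible way a hypothesis-test-based sampler could guide training toward probabilistic robustness.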