Robust learning with probabilistic relaxation using hypothesis-test-based sampling
Main Author: Wang, Zilin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/etd_coll/668
https://ink.library.smu.edu.sg/context/etd_coll/article/1666/viewcontent/GPIS_AY2022_MbR_Wang_Zilin.pdf
Institution: Singapore Management University
Summary: In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness has become a major concern, since a lack of it can be fatal in safety-critical applications. Adversarial training mitigates the issue by minimizing the loss under worst-case perturbations of the data. It is effective at improving a model's robustness, but it is overly conservative, and the model's clean performance can suffer. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, but in most existing work the robustness of the resulting model is not provable. This thesis proposes a novel approach to robust learning that samples perturbations based on hypothesis testing. The sampling guides training to improve robustness in a highly efficient probabilistic robustness setting, and it allows the robustness of the model to be provably certified.

We evaluate our new framework by generating adversarial samples from several popular datasets and comparing its performance with other state-of-the-art works. Our approach performs comparably on simple classification tasks and outperforms the state of the art on more difficult tasks.
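To make the probabilistic robustness notion in the summary concrete, below is a minimal sketch of a sampling-plus-hypothesis-test check for a single input: it draws random perturbations from an L-infinity ball and uses a one-sided Hoeffding confidence bound to decide whether the estimated violation probability stays below a tolerance. This is an illustrative sketch only, not the procedure proposed in the thesis; the names `model`, `x`, `label`, `epsilon`, `rho`, `alpha`, and `n_samples` are assumptions introduced here.

```python
# Illustrative sketch (not the thesis's algorithm): decide whether an input is
# "probably robust" by sampling perturbations and bounding the probability of
# a prediction change with a one-sided Hoeffding confidence bound.
import math
import torch

def probably_robust(model, x, label, epsilon=0.03, rho=0.05,
                    alpha=0.01, n_samples=1000):
    """Return True if, with confidence 1 - alpha, the probability that a
    uniform perturbation within the L-inf ball of radius epsilon changes
    the prediction is at most the tolerance rho.

    x is assumed to be a single input tensor (with a batch dimension of 1
    if the model expects one), and label its correct class index."""
    model.eval()
    violations = 0
    with torch.no_grad():
        for _ in range(n_samples):
            # Sample a perturbation uniformly from the L-inf ball of radius epsilon.
            delta = (torch.rand_like(x) * 2 - 1) * epsilon
            pred = model((x + delta).clamp(0, 1)).argmax(dim=-1)
            violations += int(pred.item() != label)
    p_hat = violations / n_samples
    # One-sided Hoeffding upper confidence bound on the true violation probability.
    upper = p_hat + math.sqrt(math.log(1 / alpha) / (2 * n_samples))
    return upper <= rho
```

In this reading, rho plays the role of the probabilistic relaxation (a small fraction of perturbations may be misclassified) and alpha controls the statistical confidence of the test; how such a check is folded into training and certification is the subject of the thesis itself.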