Deepcause: Verifying neural networks with abstraction refinement
Neural networks have become essential components in many safety-critical systems (such as self-driving cars and medical diagnosis). It is therefore desirable that neural networks not only have high accuracy (which traditionally can be validated using a test set) but also satisfy safety proper...
Saved in:
Main Author: | NGUYEN HUA GIA PHUC |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2022 |
Subjects: | neural network verification; abstraction refinement; CEGAR; OS and Networks; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/etd_coll/447 https://ink.library.smu.edu.sg/context/etd_coll/article/1445/viewcontent/GPIS_AY2019_MbR_Nguyen_Hua_Gia_Phuc.pdf |
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.etd_coll-1445 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.etd_coll-14452023-02-15T07:24:02Z Deepcause: Verifying neural networks with abstraction refinement NGUYEN HUA GIA PHUC, Neural networks have become essential components in many safety-critical systems (such as self-driving cars and medical diagnosis). It is therefore desirable that neural networks not only have high accuracy (which traditionally can be validated using a test set) but also satisfy safety properties (such as robustness, fairness, or freedom from backdoors). To verify neural networks against desired safety properties, many approaches have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks. One natural remedy, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). The application of CEGAR to neural network verification is, however, highly non-trivial due to the complications raised by both neural networks and abstractions. In this thesis, we propose a method to enhance abstract interpretation in verifying neural networks through an application of CEGAR in two steps. First, we employ an optimization-based procedure to validate abstractions along each propagation step and identify problematic abstractions via counterexample search. Then, we leverage a causality approach to select the most likely problematic components of the abstraction and refine them accordingly. To evaluate our approach, we have implemented a prototype named DeepCause based on DeepPoly, taking local robustness as the target safety property to verify. The evaluation shows that our proposal outperforms DeepPoly and RefinePoly on all benchmark networks. We note that our idea is not limited to a specific abstract domain, and we believe it is a promising step towards enhancing the verification of complex neural network systems.
2022-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/etd_coll/447 https://ink.library.smu.edu.sg/context/etd_coll/article/1445/viewcontent/GPIS_AY2019_MbR_Nguyen_Hua_Gia_Phuc.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Dissertations and Theses Collection (Open Access) eng Institutional Knowledge at Singapore Management University neural network verification abstraction refinement CEGAR OS and Networks Software Engineering |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
neural network verification abstraction refinement CEGAR OS and Networks Software Engineering |
spellingShingle |
neural network verification abstraction refinement CEGAR OS and Networks Software Engineering NGUYEN HUA GIA PHUC, Deepcause: Verifying neural networks with abstraction refinement |
description |
Neural networks have become essential components in many safety-critical systems (such as self-driving cars and medical diagnosis). It is therefore desirable that neural networks not only have high accuracy (which traditionally can be validated using a test set) but also satisfy safety properties (such as robustness, fairness, or freedom from backdoors). To verify neural networks against desired safety properties, many approaches have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks.
One natural remedy, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). The application of CEGAR to neural network verification is, however, highly non-trivial due to the complications raised by both neural networks and abstractions. In this thesis, we propose a method to enhance abstract interpretation in verifying neural networks through an application of CEGAR in two steps. First, we employ an optimization-based procedure to validate abstractions along each propagation step and identify problematic abstractions via counterexample search. Then, we leverage a causality approach to select the most likely problematic components of the abstraction and refine them accordingly.
To evaluate our approach, we have implemented a prototype named DeepCause based on DeepPoly, taking local robustness as the target safety property to verify. The evaluation shows that our proposal outperforms DeepPoly and RefinePoly on all benchmark networks. We note that our idea is not limited to a specific abstract domain, and we believe it is a promising step towards enhancing the verification of complex neural network systems. |
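The abstract-interpretation-with-refinement workflow described above can be illustrated with a minimal sketch: sound box (interval) propagation through a small ReLU network, a local-robustness check, and a naive input-splitting loop standing in for refinement. This is an assumption-laden illustration of the general idea only; the thesis's actual method uses the DeepPoly domain with optimization-based validation and causality-guided refinement, and the toy network and bounds below are invented for the example.

```python
# Sketch: box abstract interpretation + naive splitting refinement.
# NOT the thesis's DeepPoly/causality method; toy network is made up.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds for y = W @ x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(lo, hi, layers):
    """Propagate a box through affine+ReLU layers (no ReLU on the last)."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def verify(lo, hi, layers, target, depth=0, max_depth=6):
    """Try to prove output[target] > output[j] for all j != target.
    If the abstraction is too coarse, refine by bisecting the widest
    input dimension and verifying both halves."""
    out_lo, out_hi = output_bounds(lo, hi, layers)
    if all(out_lo[target] > out_hi[j]
           for j in range(len(out_lo)) if j != target):
        return True
    if depth == max_depth:
        return False  # inconclusive: possibly a false alarm
    i = int(np.argmax(hi - lo))        # widest input dimension
    mid = 0.5 * (lo[i] + hi[i])
    lo_r, hi_l = lo.copy(), hi.copy()
    hi_l[i], lo_r[i] = mid, mid
    return (verify(lo, hi_l, layers, target, depth + 1, max_depth)
            and verify(lo_r, hi, layers, target, depth + 1, max_depth))

# Toy 1-input network: hidden h = ReLU([x, 1 - x]), outputs
# y0 = h0 + h1 (exactly 1 on [0, 1]) and y1 = h1 = 1 - x.
layers = [(np.array([[1.0], [-1.0]]), np.array([0.0, 1.0])),
          (np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([0.0, 0.0]))]
lo0, hi0 = np.array([0.1]), np.array([0.9])

out_lo, out_hi = output_bounds(lo0, hi0, layers)
coarse_ok = out_lo[0] > out_hi[1]                # plain box: inconclusive
refined_ok = verify(lo0, hi0, layers, target=0)  # splitting proves it
```

On the whole box, interval analysis loses the correlation between the two hidden neurons, so the output bounds overlap and the property cannot be proved (a false alarm, since y0 = 1 > y1 holds exactly). Bisecting the input shrinks the lost correlation until every sub-box verifies, which is the same coarse-abstraction-then-refine dynamic that CEGAR automates with counterexample guidance.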
format |
text |
author |
NGUYEN HUA GIA PHUC, |
author_facet |
NGUYEN HUA GIA PHUC, |
author_sort |
NGUYEN HUA GIA PHUC, |
title |
Deepcause: Verifying neural networks with abstraction refinement |
title_short |
Deepcause: Verifying neural networks with abstraction refinement |
title_full |
Deepcause: Verifying neural networks with abstraction refinement |
title_fullStr |
Deepcause: Verifying neural networks with abstraction refinement |
title_full_unstemmed |
Deepcause: Verifying neural networks with abstraction refinement |
title_sort |
deepcause: verifying neural networks with abstraction refinement |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2022 |
url |
https://ink.library.smu.edu.sg/etd_coll/447 https://ink.library.smu.edu.sg/context/etd_coll/article/1445/viewcontent/GPIS_AY2019_MbR_Nguyen_Hua_Gia_Phuc.pdf |
_version_ |
1770567871292768256 |