DeepCause: Verifying neural networks with abstraction refinement
Main Author:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects:
Online Access: https://ink.library.smu.edu.sg/etd_coll/447
https://ink.library.smu.edu.sg/context/etd_coll/article/1445/viewcontent/GPIS_AY2019_MbR_Nguyen_Hua_Gia_Phuc.pdf
Institution: Singapore Management University
Summary: Neural networks have become essential components of many safety-critical systems (such as self-driving cars and medical diagnosis). It is therefore desirable that neural networks not only have high accuracy (which traditionally can be validated using a test set) but also satisfy safety properties (such as robustness, fairness, or freedom from backdoors). To verify neural networks against desired safety properties, many approaches have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks.
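To make the setting concrete, the sketch below (not taken from the thesis) checks local robustness of a toy ReLU network by propagating an interval abstraction, a far coarser domain than DeepPoly's relational bounds, through each layer. The network, its weights, and the helper names are illustrative assumptions; an inconclusive answer is precisely the kind of false alarm mentioned above.

```python
# A minimal sketch of abstract-interpretation-based robustness checking using a
# plain interval domain (much coarser than DeepPoly's relational bounds); the
# two-layer network and its weights are made-up placeholders.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an interval box through the affine map x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """Propagate an interval box through ReLU."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def verify_local_robustness(layers, x, eps, target):
    """True means every input within L-inf distance eps of x provably keeps
    class `target` on top; False only means the abstraction cannot prove it
    (a potential false alarm), not that the property is violated."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)

# Toy network: robustness of class 0 around the point [1.0, 0.2].
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[2.0, -1.0], [-1.0, 1.0]]), np.zeros(2))]
print(verify_local_robustness(layers, np.array([1.0, 0.2]), eps=0.05, target=0))  # True
```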
One natural remedy to this problem, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). Applying CEGAR to neural network verification is, however, highly non-trivial due to complications arising from both neural networks and abstractions. In this thesis, we propose a method to enhance abstract interpretation for verifying neural networks through an application of CEGAR in two steps. First, we employ an optimization-based procedure to validate abstractions along each propagation step and identify problematic abstractions via counterexample searching. Then, we leverage a causality-based approach to select the most likely problematic components of the abstraction and refine them accordingly.
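The following is a minimal sketch of a CEGAR-style loop in the spirit of the two steps above, under loudly simplified assumptions: the counterexample search is plain random sampling (standing in for the optimization-based procedure), and the component to refine is chosen as the widest input dimension (standing in for the causality-based selection). All function names and weights are hypothetical, not the thesis's implementation.

```python
# A minimal CEGAR-style sketch, not the thesis's actual procedure: falsification
# is random sampling, and "refinement" splits the widest input dimension instead
# of using causality-based selection. All weights are illustrative placeholders.
import numpy as np

def concrete_forward(layers, x):
    """Run the ReLU network on a single concrete input."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def abstract_verify(layers, lo, hi, target):
    """Interval propagation over an input box; True means robustness is proved
    on this box, False means the abstraction is inconclusive."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)

def search_counterexample(layers, lo, hi, target, samples=200, seed=0):
    """Cheap falsification: sample concrete inputs from the box and test them."""
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        if np.argmax(concrete_forward(layers, x)) != target:
            return x
    return None

def cegar_verify(layers, x, eps, target, max_splits=64):
    """Verify local robustness around x; refine (split) whenever the abstraction
    is too coarse to decide and no real counterexample is found."""
    worklist, splits = [(x - eps, x + eps)], 0
    while worklist:
        lo, hi = worklist.pop()
        if abstract_verify(layers, lo, hi, target):
            continue                          # proved on this sub-box
        cex = search_counterexample(layers, lo, hi, target)
        if cex is not None:
            return "falsified", cex           # genuine counterexample
        if splits >= max_splits:
            return "unknown", None            # refinement budget exhausted
        d = int(np.argmax(hi - lo))           # pick the widest dimension to refine
        mid = 0.5 * (lo[d] + hi[d])
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[d], right_lo[d] = mid, mid
        worklist += [(lo, left_hi), (right_lo, hi)]
        splits += 1
    return "verified", None

# Toy 2-layer network; with eps = 0.06 plain interval propagation is
# inconclusive, but a few refinement splits suffice to prove robustness.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[2.0, -1.0], [-1.0, 1.0]]), np.zeros(2))]
print(cegar_verify(layers, np.array([1.0, 0.2]), eps=0.06, target=0))
```

On the toy network the initial interval analysis cannot decide, a handful of splits prove robustness, and a genuine counterexample, if one were sampled, would be returned instead.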
To evaluate our approach, we have implemented a prototype named DeepCause based on DeepPoly, taking local robustness as the target safety property to verify. The evaluation shows that our proposal outperforms DeepPoly and RefinePoly on all benchmark networks. We note that our idea is not limited to a specific abstract domain, and we believe it is a promising step towards enhancing the verification of complex neural network systems.