Deepcause: Verifying neural networks with abstraction refinement
Neural networks are becoming essential parts of many safety-critical systems (such as self-driving cars and medical diagnosis). Due to that, it is desirable that neural networks not only have high accuracy (which traditionally can be validated using a test set) but also satisfy some safety proper...
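The abstract's notion of verifying a safety property via abstraction can be illustrated with a minimal sketch: propagating an interval (box) abstraction of the input region through a tiny ReLU network and checking the resulting output bound. This is a generic interval-bound example, not the thesis's actual DeepCause algorithm; the network weights and the property threshold are hypothetical.

```python
# Illustrative sketch (not the thesis's actual method): checking a safety
# property of a tiny ReLU network with an interval (box) abstraction.
# If the over-approximated output interval satisfies the property, the
# network is safe for ALL inputs in the input box (soundness of abstraction).
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A hypothetical 2-2-1 network.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

# Input region: x in [0, 1]^2; safety property: output < 4.
lo, hi = np.zeros(2), np.ones(2)
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
safe = bool(hi[0] < 4.0)
print(lo[0], hi[0], safe)  # output bound and verdict
```

When the abstraction is too coarse to prove the property, refinement-based approaches (as in the thesis title) split or tighten the abstraction and retry; this sketch shows only the single-pass abstract check.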
Main Author: NGUYEN HUA GIA PHUC
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/etd_coll/447
https://ink.library.smu.edu.sg/context/etd_coll/article/1445/viewcontent/GPIS_AY2019_MbR_Nguyen_Hua_Gia_Phuc.pdf
Institution: Singapore Management University
Similar Items
- Towards an effective and interpretable refinement approach for DNN verification
  by: LI, Jiaying, et al. Published: (2023)
- Automatically 'Verifying' discrete-time complex systems through learning, abstraction and refinement
  by: WANG, Jingyi, et al. Published: (2018)
- Towards interpreting recurrent neural networks through probabilistic abstraction
  by: DONG, Guoliang, et al. Published: (2020)
- QVIP: An ILP-based formal verification approach for quantized neural networks
  by: ZHANG, Yedi, et al. Published: (2022)
- A learner-verifier framework for neural network controllers and certificates of stochastic systems
  by: CHATTERJEE, Krishnendu, et al. Published: (2023)