Scalable verification of quantized neural networks

Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed for verification of an idealized model of the actual network which works over real arithmetic and ignores rounding imprecisions. This idealization is in stark contrast to network quantization, which is a technique that trades numerical precision for computational efficiency and is, therefore, often applied in practice. Neglecting rounding errors of such low-bit quantized neural networks has been shown to lead to wrong conclusions about the network's correctness. Thus, the desired approach for verifying quantized neural networks would be one that takes these rounding errors into account. In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP. Furthermore, we explore several practical heuristics toward closing the complexity gap between idealized and bit-exact verification. In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. Our experiments demonstrate that our proposed methods allow a speedup of up to three orders of magnitude over existing approaches.

Bibliographic Details
Main Authors: HENZINGER, Thomas A.; LECHNER, Mathias; ZIKELIC, Dorde
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Artificial Intelligence and Robotics; OS and Networks
DOI: 10.1609/aaai.v35i5.16496
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/9074
https://ink.library.smu.edu.sg/context/sis_research/article/10077/viewcontent/16496_Article_Text_19990_1_2_20210518.pdf
Institution: Singapore Management University