Audee: Automated testing for deep learning frameworks

Deep learning (DL) has been applied widely, and the quality of DL systems becomes crucial, especially for safety-critical applications. Existing work mainly focuses on the quality analysis of DL models but pays little attention to the underlying frameworks on which all DL models depend. In this work, we propose Audee, a novel approach for testing DL frameworks and localizing bugs. Audee adopts a search-based approach and implements three different mutation strategies to generate diverse test cases by exploring combinations of model structures, parameters, weights and inputs. Audee is able to detect three types of bugs: logical bugs, crashes and Not-a-Number (NaN) errors. In particular, for logical bugs, Audee adopts a cross-reference check to detect behavioural inconsistencies across multiple frameworks (e.g., TensorFlow and PyTorch), which may indicate potential bugs in their implementations. For NaN errors, Audee adopts a heuristic-based approach to generate DNNs that tend to output outliers (i.e., excessively large or small values), which are likely to produce NaN. Furthermore, Audee leverages a causal-testing-based technique to localize the layers and parameters that cause inconsistencies or bugs. To evaluate the effectiveness of our approach, we applied Audee to testing four DL frameworks: TensorFlow, PyTorch, CNTK and Theano. We generated a large number of DNNs covering 25 widely used APIs in the four frameworks. The results demonstrate that Audee is effective in detecting inconsistencies, crashes and NaN errors. In total, 26 unique previously unknown bugs were discovered, and 7 of them have already been confirmed or fixed by the developers.
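The cross-reference (differential) check at the heart of Audee can be illustrated with a small, framework-agnostic sketch. This is not the paper's code: `softmax_v1`, `softmax_v2` and `cross_reference_check` are illustrative stand-ins, using two NumPy implementations of the same operation in place of two real frameworks such as TensorFlow and PyTorch.

```python
import numpy as np

def softmax_v1(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = np.exp(x - np.max(x))
    return z / z.sum()

def softmax_v2(x):
    # Naive softmax: overflows for large inputs, yielding NaN.
    z = np.exp(x)
    return z / z.sum()

def cross_reference_check(op_a, op_b, inputs, atol=1e-5):
    """Run the same operation under two implementations and report
    NaN errors and behavioural inconsistencies (differential testing)."""
    findings = []
    for x in inputs:
        out_a, out_b = op_a(x), op_b(x)
        if np.isnan(out_a).any() or np.isnan(out_b).any():
            findings.append(("nan", x))
        elif not np.allclose(out_a, out_b, atol=atol):
            findings.append(("inconsistency", x))
    return findings

# A small input on which both versions agree, plus an outlier input
# (cf. Audee's NaN heuristic) that drives the naive version into
# overflow and NaN.
inputs = [np.array([0.5, 1.0, -0.5]), np.array([1000.0, 0.0, -1000.0])]
findings = cross_reference_check(softmax_v1, softmax_v2, inputs)
print([kind for kind, _ in findings])  # -> ['nan']
```

Audee applies the same idea at the level of whole generated DNNs, then uses causal testing to narrow a flagged inconsistency down to the responsible layer or parameter.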

Saved in:
Bibliographic Details
Main Authors: GUO, Qianyu, XIE, Xiaofei, LI, Yi, ZHANG, Xiaoyu, LIU, Yang, LI, Xiaohong, SHEN, Chao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2020
Subjects: Deep learning frameworks; Deep learning testing; Bug detection; OS and Networks; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/7077
https://ink.library.smu.edu.sg/context/sis_research/article/8080/viewcontent/3324884.3416571.pdf
Institution: Singapore Management University
Published online: 2020-12-01
DOI: 10.1145/3324884.3416571
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems
Subjects: Deep learning frameworks; Deep learning testing; Bug detection; OS and Networks; Software Engineering