Decompiling x86 Deep Neural Network executables

Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be undertaken at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators. We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework to process DNN executables compiled by different DL compilers and with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to facilitate inferring dimensions and parameters of DNN operators. Our evaluation reveals that BTD enables accurate recovery of full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable exhibiting identical behavior to the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching.
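The abstract states that BTD outputs a full model specification (operator types, topology, dimensions, parameters) that can be re-compiled into an executable with identical behavior. As a rough illustration only, here is a minimal sketch of what such a recovered specification could look like and how it could be re-executed; the spec layout, operator names, and all parameter values are hypothetical and are not BTD's actual output format.

```python
# Hypothetical recovered specification: a list of operators in topological
# order, each carrying its recovered dimensions and parameters.
spec = [
    {"op": "dense",
     "W": [[1.0, 2.0],
           [3.0, 4.0]],          # recovered weight matrix (2x2)
     "b": [0.5, -0.5]},          # recovered bias vector
    {"op": "relu"},              # recovered activation operator
]

def run_recovered_model(spec, x):
    """Re-execute the recovered layers in order on input vector x."""
    for layer in spec:
        if layer["op"] == "dense":
            W, b = layer["W"], layer["b"]
            # x @ W + b, written out explicitly
            x = [sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j]
                 for j in range(len(b))]
        elif layer["op"] == "relu":
            x = [max(v, 0.0) for v in x]
        else:
            raise ValueError(f"unknown operator: {layer['op']}")
    return x

print(run_recovered_model(spec, [1.0, 1.0]))  # -> [4.5, 5.5]
```

Behavioral equivalence of the recovered model can then be checked by comparing such outputs against the original executable on the same inputs.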


Bibliographic Details
Main Authors: LIU, Zhibo, YUAN, Yuanyuan, WANG, Shuai, XIE, Xiaofei, MA, Lei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Artificial Intelligence and Robotics; Information Security; Programming Languages and Compilers
Online Access: https://ink.library.smu.edu.sg/sis_research/8216
https://ink.library.smu.edu.sg/context/sis_research/article/9219/viewcontent/sec23summer_406_liu_zhibo_av.pdf
Institution: Singapore Management University