Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator
DNN accelerators have been widely deployed in many scenarios to speed up inference and reduce energy consumption. A major concern about the use of these accelerators is the confidentiality of the deployed models: model inference on the accelerators can leak side-channel information...
Saved in:
Main Authors: Yan, Xiaobei; Lou, Xiaoxuan; Xu, Guowen; Qiu, Han; Guo, Shangwei; Chang, Chip Hong; Zhang, Tianwei
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Computer and Information Science; Profiled Side-Channel Attacks; DNN Accelerator; Sequence-to-Sequence Learning; FPGA; Model Extraction
Online Access: https://hdl.handle.net/10356/171839
https://fpt2023.org/index.html
Institution: Nanyang Technological University
Language: English
id: sg-ntu-dr.10356-171839
record_format: dspace
spelling:
sg-ntu-dr.10356-171839 2024-02-09T02:14:56Z
Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator
Yan, Xiaobei; Lou, Xiaoxuan; Xu, Guowen; Qiu, Han; Guo, Shangwei; Chang, Chip Hong; Zhang, Tianwei
School of Computer Science and Engineering
2023 International Conference on Field-Programmable Technology (ICFPT)
Computer and Information Science; Profiled Side-Channel Attacks; DNN Accelerator; Sequence-to-Sequence Learning; FPGA; Model Extraction
DNN accelerators have been widely deployed in many scenarios to speed up inference and reduce energy consumption. A major concern about the use of these accelerators is the confidentiality of the deployed models: model inference on the accelerators can leak side-channel information, which enables an adversary to precisely recover the model details. Such model extraction attacks not only compromise the intellectual property of DNN models but also facilitate further adversarial attacks. Although previous works have demonstrated a number of side-channel techniques for extracting models from DNN accelerators, they are not practical for two reasons. (1) They only target simplified accelerator implementations, which have limited practicality in the real world. (2) They require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against the off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary leverages a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference, then uses a learning model to automatically recover the architecture details of the victim model from the power trace without any prior knowledge. The adversary can further use the attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury keeps the error rate of model extraction below 1%.
Cyber Security Agency; National Research Foundation (NRF)
Submitted/Accepted version
This research is supported by National Research Foundation, Singapore, and Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme <NRF2018NCRNCR009-0001>), and MoE Tier 1 RS02/19.
2023-12-28T07:54:23Z 2023-12-28T07:54:23Z 2023
Conference Paper
Yan, X., Lou, X., Xu, G., Qiu, H., Guo, S., Chang, C. H. & Zhang, T. (2023). Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator. 2023 International Conference on Field-Programmable Technology (ICFPT), 188-197. https://dx.doi.org/10.1109/ICFPT59805.2023.00026
979-8-3503-5911-4
2837-0449
https://hdl.handle.net/10356/171839
10.1109/ICFPT59805.2023.00026
https://fpt2023.org/index.html
188 197
en
NRF2018NCRNCR009-0001 RS02/19
© 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/ICFPT59805.2023.00026.
application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Computer and Information Science; Profiled Side-Channel Attacks; DNN Accelerator; Sequence-to-Sequence Learning; FPGA; Model Extraction
description:
DNN accelerators have been widely deployed in many scenarios to speed up inference and reduce energy consumption. A major concern about the use of these accelerators is the confidentiality of the deployed models: model inference on the accelerators can leak side-channel information, which enables an adversary to precisely recover the model details. Such model extraction attacks not only compromise the intellectual property of DNN models but also facilitate further adversarial attacks.
Although previous works have demonstrated a number of side-channel techniques for extracting models from DNN accelerators, they are not practical for two reasons. (1) They only target simplified accelerator implementations, which have limited practicality in the real world. (2) They require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against the off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary leverages a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference, then uses a learning model to automatically recover the architecture details of the victim model from the power trace without any prior knowledge. The adversary can further use the attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury keeps the error rate of model extraction below 1%.
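To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of casting side-channel model extraction as sequence-to-sequence learning: a GRU encoder ingests a power trace such as one sampled by a TDC, and an attention-equipped decoder emits architecture tokens; the attention weights indicate which trace segments drive each prediction. The layer vocabulary, dimensions, and the random toy trace are invented for illustration.

```python
# Hypothetical sketch: side-channel extraction as sequence-to-sequence translation.
# Encoder: power trace (one TDC reading per time step) -> hidden states.
# Decoder: architecture tokens (e.g. "conv3x3", "relu"), attending over the trace.
import torch
import torch.nn as nn

LAYER_VOCAB = ["<pad>", "<sos>", "<eos>", "conv3x3", "conv1x1", "relu", "pool", "fc"]

class TraceEncoder(nn.Module):
    """Encodes a 1-D power trace into a sequence of hidden states."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)

    def forward(self, trace):                      # trace: (B, T, 1)
        outputs, h = self.rnn(trace)               # outputs: (B, T, H)
        return outputs, h

class AttnDecoder(nn.Module):
    """Decodes architecture tokens; the attention weights over the encoded
    trace hint at the leakage points most responsible for each prediction."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.rnn = nn.GRU(hidden * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token, h, enc_outputs):      # token: (B, 1) current input token
        emb = self.embed(token)                    # (B, 1, H)
        query = h[-1].unsqueeze(1)                 # (B, 1, H) decoder state as query
        context, weights = self.attn(query, enc_outputs, enc_outputs)
        rnn_in = torch.cat([emb, context], dim=-1) # (B, 1, 2H)
        output, h = self.rnn(rnn_in, h)
        return self.out(output.squeeze(1)), h, weights  # logits, state, attention map

# Toy usage: one fake trace of 200 TDC samples, greedy decoding of 5 tokens.
enc, dec = TraceEncoder(), AttnDecoder(len(LAYER_VOCAB))
trace = torch.randn(1, 200, 1)                     # stand-in for a measured power trace
enc_out, h = enc(trace)
token = torch.tensor([[LAYER_VOCAB.index("<sos>")]])
for _ in range(5):
    logits, h, attn_weights = dec(token, h, enc_out)
    token = logits.argmax(dim=-1, keepdim=True)    # next predicted layer token
    # Print the predicted layer and the trace time step with peak attention.
    print(LAYER_VOCAB[token.item()], attn_weights.argmax().item())
```

In a real attack such a network would be trained on labeled (trace, architecture) pairs collected from known models; the untrained network above merely demonstrates the data flow.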
author2: School of Computer Science and Engineering
format: Conference or Workshop Item
author: Yan, Xiaobei; Lou, Xiaoxuan; Xu, Guowen; Qiu, Han; Guo, Shangwei; Chang, Chip Hong; Zhang, Tianwei
author_sort: Yan, Xiaobei
title: Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator
publishDate: 2023
url: https://hdl.handle.net/10356/171839
https://fpt2023.org/index.html
_version_: 1794549457282400256