Goten: GPU-outsourcing trusted execution of neural network training

Deep learning unlocks applications with societal impacts, e.g., detecting child exploitation imagery and genomic analysis of rare diseases. Deployment, however, needs compliance with stringent privacy regulations. Training algorithms that preserve the privacy of training data are in pressing need. Purely cryptographic approaches can protect privacy, but they are still costly, even when they rely on two or more non-colluding servers. Seemingly “trivial” operations in plaintext quickly become prohibitively inefficient when a series of them are “crypto-processed,” e.g., (dynamic) quantization for ensuring the intermediate values would not overflow. Slalom, recently proposed by Tramèr and Boneh, is the first solution that leverages both a GPU (for efficient batch computation) and a trusted execution environment (TEE) (for minimizing the use of cryptography). Roughly, it works by a lot of pre-computation over known and fixed weights, and hence it only supports private inference. Five related problems for private training are left unaddressed.

Goten, our privacy-preserving training and prediction framework, tackles all five problems simultaneously via our careful design over the “mismatched” cryptographic and GPU data types (due to the tension between precision and efficiency) and our round-optimal GPU-outsourcing protocol (hence minimizing the communication cost between servers). It 1) stochastically trains a low-bitwidth yet accurate model, 2) supports dynamic quantization (a challenge left by Slalom), 3) minimizes the memory-swapping overhead of the memory-limited TEE and its communication with the GPU, 4) crypto-protects the (dynamic) model weights from the untrusted GPU, and 5) outperforms a pure-TEE system, even without pre-computation (needed by Slalom). As a baseline, we build CaffeScone, which secures Caffe using a TEE but not a GPU; Goten shows a 6.84× speed-up for the whole VGG-11. Goten also outperforms Falcon, proposed by Wagh et al., the latest secure multi-server cryptographic solution, by 132.64× on VGG-11. Lastly, we demonstrate Goten’s efficacy in training models for breast cancer diagnosis over sensitive images.
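The abstract outlines the core mechanism: model weights are additively secret-shared so that an untrusted GPU can compute on them without learning them, while fixed-point quantization keeps values inside the integer ring the cryptography works over. Below is a minimal Python sketch of that quantize/share/reconstruct step only. It is an illustration under assumed parameters (RING, SCALE, and all function names are ours, not from the paper's code), and it omits Goten's round-optimal outsourcing protocol and its dynamic choice of quantization scale.

# Minimal sketch (not the authors' implementation) of additive secret
# sharing with fixed-point quantization, as described in the abstract:
# an untrusted GPU holding one share of the weights learns nothing.
import numpy as np

RING = 2**32   # assumed: shares live in Z_{2^32}
SCALE = 2**8   # assumed fixed-point scale; Goten picks scales dynamically

def quantize(x, scale=SCALE):
    """Map floats to ring elements (fixed-point, reduced mod RING)."""
    return np.round(x * scale).astype(np.int64) % RING

def share(x_q, rng):
    """Split a quantized tensor into two additive shares mod RING."""
    r = rng.integers(0, RING, size=x_q.shape, dtype=np.int64)
    return r, (x_q - r) % RING

def reconstruct(s0, s1, scale=SCALE):
    """Recombine shares (inside the TEE) and de-quantize to floats."""
    v = (s0 + s1) % RING
    v = np.where(v >= RING // 2, v - RING, v)  # back to signed values
    return v.astype(np.float64) / scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))        # "private" model weights
w0, w1 = share(quantize(w), rng)       # each server/GPU sees one share
assert np.allclose(reconstruct(w0, w1), np.round(w * SCALE) / SCALE)

Each share alone is uniformly random over the ring, which is why the GPU learns nothing from it; the actual protocol additionally arranges the GPU's linear-algebra work on shares so that only one communication round between the non-colluding servers is needed.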

Bibliographic Details
Main Authors: Ng, Lucian K. L.; Chow, Sherman S. M.; Woo, Anna P. Y.; Wong, Donald P. H.; Zhao, Yongjun
Conference: Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)
Format: Conference or Workshop Item
Language: English
Published: 2021 (deposited in DR-NTU: 2022-05-09)
Subjects: Engineering::Computer science and engineering; Neural Network; GPU-Outsourcing
Online Access:https://hdl.handle.net/10356/157152
https://ojs.aaai.org/index.php/AAAI/issue/archive
Institution: Nanyang Technological University
Collection: DR-NTU (NTU Library)
Citation: Ng, L. K. L., Chow, S. S. M., Woo, A. P. Y., Wong, D. P. H. & Zhao, Y. (2021). Goten: GPU-outsourcing trusted execution of neural network training. Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 35, 14876-14883.
ISBN: 978-1-57735-866-4
ISSN: 2159-5399
Version: Submitted/Accepted version
Affiliations: Nanyang Technopreneurship Center; Strategic Centre for Research in Privacy-Preserving Technologies & Systems (SCRIPTS)
Funding: Info-communications Media Development Authority (IMDA); National Research Foundation (NRF). This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative.
Rights: © 2021 Association for the Advancement of Artificial Intelligence. All rights reserved. This paper was published in Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) and is made available with permission of the Association for the Advancement of Artificial Intelligence.