PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
Split computing has gained attention in deep learning as a scheme for edge computing. Split computing splits a model into head and tail models. The head model is executed on the local device and its output is sent to the edge server. This output forms the input to the tail model that resides on the edg...
Main Author: Zhu, Zhentao
Other Authors: Tay Wee Peng
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering
Online Access: https://hdl.handle.net/10356/181324
Institution: Nanyang Technological University
Language: English
id: sg-ntu-dr.10356-181324
record_format: dspace
spelling: sg-ntu-dr.10356-181324 2024-11-25T07:54:47Z PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes Zhu, Zhentao Tay Wee Peng School of Electrical and Electronic Engineering wptay@ntu.edu.sg Engineering Split computing has gained attention in deep learning as a scheme for edge computing. Split computing splits a model into head and tail models. The head model is executed on the local device and its output is sent to the edge server. This output forms the input to the tail model that resides on the edge server. Compared to traditional edge computing, split computing can fully utilise the resources of both local devices and edge servers. Meanwhile, split computing can significantly reduce the communication overhead and enhance privacy during data transmission. Existing research has given less consideration to the implementation of split computing. This dissertation proposes a framework called pipelined split computing (PipeSC). It dynamically selects an appropriate split point according to the local device's computational resources and the communication conditions, and constructs an optimal pipelined inference schedule. The framework is designed with a pipeline in which the input batch size on the local device and the input batch size on the edge server are independent. Our numerical experiments demonstrate that PipeSC achieves lower latency than traditional serial split computing, and verify that the use of independent batch sizes is effective. Master's degree 2024-11-25T07:54:46Z 2024-11-25T07:54:46Z 2024 Thesis-Master by Coursework Zhu, Z. (2024). PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181324 https://hdl.handle.net/10356/181324 en application/pdf Nanyang Technological University
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering
spellingShingle: Engineering Zhu, Zhentao PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
description: Split computing has gained attention in deep learning as a scheme for edge computing. Split computing splits a model into head and tail models. The head model is executed on the local device and its output is sent to the edge server. This output forms the input to the tail model that resides on the edge server. Compared to traditional edge computing, split computing can fully utilise the resources of both local devices and edge servers. Meanwhile, split computing can significantly reduce the communication overhead and enhance privacy during data transmission.

Existing research has given less consideration to the implementation of split computing. This dissertation proposes a framework called pipelined split computing (PipeSC). It dynamically selects an appropriate split point according to the local device's computational resources and the communication conditions, and constructs an optimal pipelined inference schedule. The framework is designed with a pipeline in which the input batch size on the local device and the input batch size on the edge server are independent. Our numerical experiments demonstrate that PipeSC achieves lower latency than traditional serial split computing, and verify that the use of independent batch sizes is effective.
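The pipelined scheme the abstract describes — a head stage on the local device and a tail stage on the edge server, each with its own batch size — can be sketched with standard-library Python. This is a minimal illustration, not the thesis's actual implementation: `head` and `tail` are hypothetical stand-ins for the split model halves, and the batch sizes are arbitrary.

```python
from queue import Queue
from threading import Thread

def head(x):
    # hypothetical head-model computation on the local device
    return x * 2

def tail(batch):
    # hypothetical tail-model computation on the edge server
    return [v + 1 for v in batch]

def run_pipeline(inputs, local_batch=2, server_batch=4):
    """Run head and tail stages concurrently with independent batch sizes."""
    q = Queue()  # stands in for the device-to-server communication link

    def producer():
        # the local device emits head outputs in batches of `local_batch`
        for i in range(0, len(inputs), local_batch):
            q.put([head(v) for v in inputs[i:i + local_batch]])
        q.put(None)  # sentinel: no more batches

    Thread(target=producer, daemon=True).start()

    results, buffer = [], []
    while True:
        batch = q.get()
        if batch is None:
            break
        buffer.extend(batch)
        # the edge server re-batches independently into `server_batch`
        while len(buffer) >= server_batch:
            results.extend(tail(buffer[:server_batch]))
            buffer = buffer[server_batch:]
    if buffer:  # flush any remainder smaller than a full server batch
        results.extend(tail(buffer))
    return results
```

Because the head stage keeps producing while the tail stage consumes, the two stages overlap in time, which is the source of the latency reduction over serial split computing; the re-batching buffer is what lets the two batch sizes differ.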
author2: Tay Wee Peng
author_facet: Tay Wee Peng; Zhu, Zhentao
format: Thesis-Master by Coursework
author: Zhu, Zhentao
author_sort: Zhu, Zhentao
title: PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
title_short: PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
title_full: PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
title_fullStr: PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
title_full_unstemmed: PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
title_sort: pipesc: a split computing framework for pipeline implementations considering independent input batch sizes
publisher: Nanyang Technological University
publishDate: 2024
url: https://hdl.handle.net/10356/181324
_version_: 1816859068396994560