PipeSC: a split computing framework for pipeline implementations considering independent input batch sizes
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181324
Institution: Nanyang Technological University
Summary: Split computing has gained attention in deep learning as a scheme for edge computing. Split computing splits a model into a head model and a tail model. The head model is executed on the local device and its output is sent to the edge server, where it forms the input to the tail model. Compared to traditional edge computing, split computing can fully utilise the resources of both local devices and edge servers. It can also significantly reduce communication overhead and enhance privacy during data transmission.
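To make the head/tail decomposition concrete, here is a minimal sketch assuming a PyTorch `nn.Sequential` model; the toy architecture and the split point are illustrative, not taken from the dissertation.

```python
import torch
import torch.nn as nn

def split_model(model: nn.Sequential, split_point: int):
    """Split a sequential model into head and tail submodels."""
    layers = list(model.children())
    head = nn.Sequential(*layers[:split_point])
    tail = nn.Sequential(*layers[split_point:])
    return head, tail

# Toy model; in split computing the first layers run on the local device
# and the remaining layers run on the edge server.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)
head, tail = split_model(model, split_point=2)

x = torch.randn(1, 3, 32, 32)  # raw input stays on the local device
z = head(x)                    # intermediate activation, sent over the network
y = tail(z)                    # inference completed on the edge server
```

In deployment, `z` would be serialised and transmitted to the edge server rather than passed in-process; its size relative to the raw input is what drives the communication savings.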
Existing research has given little consideration to the practical implementation of split computing. This dissertation proposes a framework called pipelined split computing (PipeSC). It dynamically selects an appropriate split point according to the local device's computational resources and the communication conditions, and constructs an optimal pipelined inference schedule. The framework is designed around a pipeline in which the input batch size on the local device and the input batch size on the edge server are independent, as illustrated in the sketch below. Our numerical experiments demonstrate that PipeSC achieves lower latency than traditional serial split computing, and verify that the use of independent batch sizes is effective.
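The independent-batch-size idea can be illustrated with a small sketch, assuming Python threads and a queue standing in for the network link; the worker functions, batch sizes, and flush policy below are hypothetical and only indicate how a local head running small batches can feed an edge-side tail that batches activations more aggressively.

```python
import queue
import threading
import torch

def head_worker(head, data, head_batch, link: queue.Queue):
    """Local device: run the head on small batches and ship activations."""
    with torch.no_grad():
        for i in range(0, len(data), head_batch):
            link.put(head(data[i : i + head_batch]))
    link.put(None)  # end-of-stream marker

def tail_worker(tail, tail_batch, link: queue.Queue, results: list):
    """Edge server: accumulate activations into larger batches for the tail."""
    buffer = []
    with torch.no_grad():
        while True:
            z = link.get()
            done = z is None
            if not done:
                buffer.append(z)
            # Flush once enough samples have accumulated, or at end of stream.
            if buffer and (done or sum(b.shape[0] for b in buffer) >= tail_batch):
                results.append(tail(torch.cat(buffer)))
                buffer = []
            if done:
                return

# Reusing `head` and `tail` from the previous sketch: the device runs
# micro-batches of 2 while the server batches 8 activations per tail pass.
data = torch.randn(16, 3, 32, 32)
link, results = queue.Queue(), []
producer = threading.Thread(target=head_worker, args=(head, data, 2, link))
consumer = threading.Thread(target=tail_worker, args=(tail, 8, link, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Because the two stages overlap in time, part of the head computation and transfer cost is hidden behind the tail computation, which is the effect the dissertation's experiments measure against serial split computing.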