An empirical study towards characterizing deep learning development and deployment across different frameworks and platforms
Deep Learning (DL) has recently achieved tremendous success, and a variety of DL frameworks and platforms have played a key role in catalyzing this progress. However, differences in the architecture designs and implementations of existing frameworks and platforms bring new challenges for DL software development and deployment. To date, there has been no study of how mainstream frameworks and platforms influence DL software development and deployment in practice. To fill this gap, we take a first step towards understanding how the most widely used DL frameworks and platforms support DL software development and deployment. We conduct a systematic study of these frameworks and platforms using two types of DNN architectures and three popular datasets. (1) For the development process, we investigate prediction accuracy under the same runtime training configuration or the same model weights/biases, and we study the adversarial robustness of trained models by leveraging existing adversarial attack techniques. The experimental results show that computation differences across frameworks can cause a noticeable decline in prediction accuracy, which should draw the attention of DL developers. (2) For the deployment process, we investigate prediction accuracy and performance (i.e., time cost and memory consumption) when trained models are migrated or quantized from PCs to real mobile devices and web browsers. The platform study reveals that migration and quantization still suffer from compatibility and reliability issues. Using these results as a benchmark, we also find several DL software bugs, and we further validate the results through bug confirmations from stakeholders and positive industrial feedback, which highlights the implications of our study. Based on the study, we summarize practical guidelines, identify challenges, and pinpoint new research directions, such as understanding the characteristics of DL frameworks and platforms, avoiding compatibility and reliability issues, detecting DL software bugs, and reducing time cost and memory consumption towards developing and deploying high-quality DL systems effectively.
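For readers who want a concrete picture of the deployment step the abstract describes (migrating and quantizing a trained model from a PC to a mobile device), the sketch below shows what such a step can look like. It is a minimal illustration assuming TensorFlow 2.x and TensorFlow Lite post-training quantization; the tiny model, dataset, and file names are placeholders and are not the exact frameworks, architectures, or configurations evaluated in the paper.

```python
# Minimal sketch of a PC -> mobile "migrate and quantize" step, assuming TensorFlow 2.x.
# The small model and file names below are illustrative placeholders only.
import tensorflow as tf

# A small Keras classifier standing in for the trained model on the PC side.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # training data omitted in this sketch

# Convert the trained model to TensorFlow Lite with default post-training
# quantization; the resulting .tflite file is what would be shipped to a device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_bytes)

# Reloading the quantized model with the TFLite interpreter allows prediction
# accuracy, time cost, and memory to be compared against the original model.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])
```

The default optimization setting applies dynamic-range quantization, which shrinks weights to 8 bits without a calibration dataset; full-integer quantization would additionally require a representative dataset. Either choice can introduce the accuracy, compatibility, and performance differences that the study measures.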
Main Authors: | GUO, Qianyu; CHEN, Sen; XIE, Xiaofei; MA, Lei; HU, Qiang; LIU, Hongtao; LIU, Yang; ZHAO, Jianjun; LI, Xiaohong |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2019 |
Subjects: | Deep learning frameworks; Deep learning platforms; Deep learning deployment; Empirical study; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7069 ; https://ink.library.smu.edu.sg/context/sis_research/article/8072/viewcontent/ASE.2019.00080.pdf |
DOI: | info:doi/10.1109/ASE.2019.00080 |
License: | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Collection: | Research Collection School Of Computing and Information Systems |
Institution: | Singapore Management University |
id
sg-smu-ink.sis_research-8072
record_format
dspace
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Deep learning frameworks; Deep learning platforms; Deep learning deployment; Empirical study; Software Engineering
description
Deep Learning (DL) has recently achieved tremendous success, and a variety of DL frameworks and platforms have played a key role in catalyzing this progress. However, differences in the architecture designs and implementations of existing frameworks and platforms bring new challenges for DL software development and deployment. To date, there has been no study of how mainstream frameworks and platforms influence DL software development and deployment in practice. To fill this gap, we take a first step towards understanding how the most widely used DL frameworks and platforms support DL software development and deployment. We conduct a systematic study of these frameworks and platforms using two types of DNN architectures and three popular datasets. (1) For the development process, we investigate prediction accuracy under the same runtime training configuration or the same model weights/biases, and we study the adversarial robustness of trained models by leveraging existing adversarial attack techniques. The experimental results show that computation differences across frameworks can cause a noticeable decline in prediction accuracy, which should draw the attention of DL developers. (2) For the deployment process, we investigate prediction accuracy and performance (i.e., time cost and memory consumption) when trained models are migrated or quantized from PCs to real mobile devices and web browsers. The platform study reveals that migration and quantization still suffer from compatibility and reliability issues. Using these results as a benchmark, we also find several DL software bugs, and we further validate the results through bug confirmations from stakeholders and positive industrial feedback, which highlights the implications of our study. Based on the study, we summarize practical guidelines, identify challenges, and pinpoint new research directions, such as understanding the characteristics of DL frameworks and platforms, avoiding compatibility and reliability issues, detecting DL software bugs, and reducing time cost and memory consumption towards developing and deploying high-quality DL systems effectively.
format
text
author
GUO, Qianyu; CHEN, Sen; XIE, Xiaofei; MA, Lei; HU, Qiang; LIU, Hongtao; LIU, Yang; ZHAO, Jianjun; LI, Xiaohong
author_sort
GUO, Qianyu
title
An empirical study towards characterizing deep learning development and deployment across different frameworks and platforms
title_sort
empirical study towards characterizing deep learning development and deployment across different frameworks and platforms
publisher
Institutional Knowledge at Singapore Management University
publishDate
2019
url
https://ink.library.smu.edu.sg/sis_research/7069 ; https://ink.library.smu.edu.sg/context/sis_research/article/8072/viewcontent/ASE.2019.00080.pdf