Computer vision optimization on embedded GPU board
Computer vision tasks such as image classification are in widespread use and have been greatly advanced by deep learning techniques, in particular convolutional neural networks (CNNs). Performing such tasks on specialised embedded GPU boards offers intriguing prospects for edge computing. In this study, popular CNN architectures including GoogLeNet, ResNet and VGG were implemented on the Jetson Xavier NX Developer Kit. The models were implemented using several deep learning frameworks, namely PyTorch, TensorFlow and Caffe, the last in combination with TensorRT, Nvidia's optimization tool for inference models. The implementations were evaluated and compared on metrics including inference timing and resource utilization. The study concludes that DL-based computer vision tasks remain compute-bound even on more powerful embedded GPU devices, and that the choice of framework has a significant effect on inference performance. In particular, TensorRT yields a very significant improvement in inference timing and scales well across model architectures and depths.
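The thesis itself is only available via the repository link below, so the snippet that follows is a minimal sketch, not the author's benchmark code: it shows one common way to time per-image CNN inference in PyTorch on a CUDA device such as the Jetson Xavier NX. The model choice (ResNet-50, standing in for the ResNet family named in the abstract), batch size, warm-up count and iteration count are illustrative assumptions.

```python
# Hypothetical benchmarking sketch -- NOT the thesis code.
# Times CNN inference in PyTorch on a CUDA device (e.g. Jetson Xavier NX).
import time

import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Random weights suffice: latency does not depend on the learned parameters.
model = models.resnet50(weights=None).to(device).eval()
batch = torch.randn(1, 3, 224, 224, device=device)  # one 224x224 RGB image

def sync():
    # GPU kernels launch asynchronously; synchronize before reading the clock.
    if device.type == "cuda":
        torch.cuda.synchronize()

with torch.no_grad():
    for _ in range(10):   # warm-up runs absorb one-off CUDA/cuDNN init costs
        model(batch)
    sync()
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        model(batch)
    sync()
    elapsed = time.perf_counter() - start

print(f"mean latency: {elapsed / n * 1000:.2f} ms/image")
```

For the TensorRT comparison, one common route (distinct from the Caffe-based path the abstract describes) is to export the model to ONNX and build an engine with Nvidia's trtexec tool, e.g. `trtexec --onnx=resnet50.onnx --saveEngine=resnet50.engine --fp16`, then time inference through the TensorRT runtime instead of the framework.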
Saved in: DR-NTU (Nanyang Technological University)
Main Author: Li, Ziyang
Other Authors: Vun Chan Hua, Nicholas
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/156654
Institution: Nanyang Technological University
Full record:

id: sg-ntu-dr.10356-156654
record_format: dspace
title: Computer vision optimization on embedded GPU board
author: Li, Ziyang
author2: Vun Chan Hua, Nicholas (ASCHVUN@ntu.edu.sg)
school: School of Computer Science and Engineering
degree: Bachelor of Engineering (Computer Engineering)
project code: SCSE21-0325
citation: Li, Z. (2022). Computer vision optimization on embedded GPU board. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156654
date deposited: 2022-04-22
date issued: 2022
file format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
publisher: Nanyang Technological University
publishDate: 2022
url: https://hdl.handle.net/10356/156654
_version_: 1731235751999307776