Benchmarking of the popular DL Frameworks over multiple GPU cards on state-of-the-art CNN architectures

Bibliographic Details
Main Author: Kow, Li Ren
Other Authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: 2018
Online Access: http://hdl.handle.net/10356/74869
Institution: Nanyang Technological University
Description
Summary: Neural networks become more difficult and more time-consuming to train as their depth increases. Yet as they have grown deeper, deep neural networks have come to dominate most pattern recognition algorithms and applications, especially in natural language processing and computer vision. Training a deep neural network involves a large amount of floating-point matrix computation, which is time-consuming on a central processing unit (CPU). Although a graphics processing unit (GPU) handles floating-point calculations far better, training still takes a long time when the dataset is large and the model is deep. Hence, multiple GPU cards can be used in parallel to accelerate the entire training process. It is therefore important to understand how fast training can be under different deep learning frameworks (MXNet, PyTorch and Caffe2), and which key software and hardware factors affect this parallel training process on a single-node or multi-node configuration.
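
As a rough illustration of the single-node multi-GPU training the summary describes, the sketch below uses one of the frameworks it names (PyTorch) and wraps a CNN in nn.DataParallel, which splits each input batch across all visible GPU cards. The model choice (resnet50), batch size, and optimizer settings are illustrative assumptions, not values taken from the project.

import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative choices (not from the project): a standard CNN and SGD settings.
model = models.resnet50()
if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU; each forward pass splits the
    # input batch across the replicas and sums gradients on the default GPU.
    model = nn.DataParallel(model)
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One training step on a dummy ImageNet-sized batch; a benchmark would time
# many such steps over a large dataset while varying the number of GPU cards.
inputs = torch.randn(64, 3, 224, 224).cuda()
targets = torch.randint(0, 1000, (64,)).cuda()

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()

For multi-node configurations, each framework provides its own distributed mechanism (for example, torch.nn.parallel.DistributedDataParallel in PyTorch), where interconnect bandwidth between nodes becomes one of the key hardware factors.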