A study of deep learning on many-core processors
Main Author: | |
---|---|
Other Authors: | |
Format: | Final Year Project |
Language: | English |
Published: | 2016 |
Subjects: | |
Online Access: | http://hdl.handle.net/10356/66954 |
Institution: | Nanyang Technological University |
Summary: | Deep learning has recently become a hot topic in many areas, from industry to academia, and the growing number of applications built on it attracts a great deal of public attention. Several problems in deep learning remain challenging research topics. With big data, training time is one of the major concerns when designing a deep network, and parallel processing is a promising way to reduce it substantially.
This project investigates several strategies for training a deep network in parallel on the Apache SINGA platform: training in synchronous mode, in asynchronous mode, and on a GPU. Although many factors determine the quality of a network design, training time is the main concern in this project.
The training time of a deep network can be reduced to a very large extent by GPU training, while multi-process training on a single machine yields only a small speedup. A more complete analysis would also consider scalability and measure performance on a cluster. |
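To make the difference between the two multi-process strategies concrete, below is a minimal, self-contained Python sketch of synchronous versus asynchronous data-parallel SGD on a toy linear-regression problem. It does not use the Apache SINGA API; the model, data, and update scheme are illustrative assumptions only.

```python
# Toy illustration of synchronous vs. asynchronous data-parallel SGD.
# This does NOT use the Apache SINGA API; the model, data and update
# rule are simplified assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across four "workers".
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(1000, 2))
y = X @ true_w + 0.1 * rng.normal(size=1000)
shards = np.array_split(np.arange(1000), 4)        # one index shard per worker

def grad(w, idx):
    """Mean-squared-error gradient on one worker's shard."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

def train_synchronous(steps=200, lr=0.05):
    """All workers compute gradients on the same parameters, the
    gradients are averaged, and every worker then sees the new weights."""
    w = np.zeros(2)
    for _ in range(steps):
        g = np.mean([grad(w, idx) for idx in shards], axis=0)
        w -= lr * g
    return w

def train_asynchronous(steps=200, lr=0.05):
    """Workers push updates one at a time without a barrier; each
    gradient is computed on the (possibly stale) snapshot the worker
    pulled before the other workers' latest updates."""
    w = np.zeros(2)
    snapshots = [w.copy() for _ in shards]          # each worker's stale view
    for step in range(steps):
        k = step % len(shards)                      # next worker to report
        w -= lr * grad(snapshots[k], shards[k])     # update from stale view
        snapshots[k] = w.copy()                     # worker pulls fresh weights
    return w

print("synchronous :", train_synchronous())
print("asynchronous:", train_asynchronous())
```

In the synchronous sketch every gradient is computed on identical parameters and averaged before a single update, while the asynchronous sketch lets each worker apply an update computed on a possibly stale snapshot, trading some statistical efficiency for less waiting between workers.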