Distributed machine learning on public clouds
Machine learning (ML) aims to construct predictive models from example input data. Conventional ML systems like Caffe could have acceptable model training time on a single machine when dealing with a moderate amount of data. However, they may not be able to cope with very large training data sets, such as ImageNet and Yahoo News Feed, which could have hundreds of millions of records. Several distributed ML systems have been proposed to reduce model training time. However, the behaviors of these systems on heterogeneous infrastructures such as public cloud infrastructures, e.g., Amazon EC2, Google GCE or Windows Azure, have not been thoroughly investigated. In this project, we will examine the performance of popular distributed ML systems such as Distributed TensorFlow and Horovod on Amazon Web Services.
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2019
Subjects:
Online Access: http://hdl.handle.net/10356/76892
Institution: Nanyang Technological University
Summary: Machine learning (ML) aims to construct predictive models from example input data. Conventional ML systems like Caffe could have acceptable model training time on a single machine when dealing with a moderate amount of data. However, they may not be able to cope with very large training data sets, such as ImageNet and Yahoo News Feed, which could have hundreds of millions of records. Several distributed ML systems have been proposed to reduce model training time. However, the behaviors of these systems on heterogeneous infrastructures such as public cloud infrastructures, e.g., Amazon EC2, Google GCE or Windows Azure, have not been thoroughly investigated. In this project, we will examine the performance of popular distributed ML systems such as Distributed TensorFlow and Horovod on Amazon Web Services.
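The systems named in the summary cut training time mainly through synchronous data parallelism: each worker computes gradients on its own shard of the training data, an allreduce step averages those gradients, and every worker then applies the same update. The toy sketch below is a minimal, framework-free simulation of that averaging step for a one-parameter linear model; all names are illustrative and it is not Horovod's actual API, only the idea behind its allreduce.

```python
# Toy simulation of synchronous data-parallel SGD with gradient averaging,
# the core pattern behind allreduce-based systems such as Horovod.
# Illustrative sketch only; a real system runs the workers in parallel.

def local_gradient(w, shard):
    """Mean-squared-error gradient for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(grads):
    """Average gradients across workers (what a ring-allreduce computes)."""
    return sum(grads) / len(grads)

# Synthetic data from y = 3x, split round-robin across 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 33)]
shards = [data[i::4] for i in range(4)]

w, lr = 0.0, 0.001
for step in range(200):
    grads = [local_gradient(w, s) for s in shards]  # one gradient per worker
    w -= lr * allreduce_mean(grads)                 # identical update everywhere

print(round(w, 2))  # converges toward the true slope 3.0
```

Because every worker applies the same averaged gradient, all replicas of the model stay identical, which is why the heterogeneity of cloud instances (stragglers, uneven network bandwidth) directly affects the step time of such systems.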