A-SDLM: an asynchronous Stochastic Learning Algorithm for fast distributed learning
Main Authors:
Format: Conference or Workshop Item
Published: 2015
Online Access: http://eprints.utm.my/id/eprint/59161/
Institution: Universiti Teknologi Malaysia
Summary: We propose an asynchronous version of a stochastic second-order optimization algorithm for parallel distributed learning. The proposed algorithm, Asynchronous Stochastic Diagonal Levenberg-Marquardt (A-SDLM), contains only a single hyper-parameter (the learning rate) while still retaining its second-order properties. We also present a machine learning framework for neural network training to demonstrate the effectiveness of the proposed algorithm. The framework includes additional learning procedures that can further improve learning performance. It is derived from a peer-worker thread model, designed around a data-parallelism approach, and implemented using multi-threaded programming. Our experiments show the potential of applying a second-order learning algorithm to distributed learning, achieving better training speedup and higher accuracy than traditional SGD.
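This record does not include the paper's equations, but as background, Stochastic Diagonal Levenberg-Marquardt (the method A-SDLM builds on) scales a single global learning rate per parameter by a diagonal curvature estimate, and the "asynchronous" aspect has workers updating shared parameters without coordination. The sketch below illustrates both ideas on a separable quadratic loss, where the diagonal Hessian is known exactly; the damping constant `MU` and all names are illustrative assumptions, not the paper's notation or code.

```python
import threading
import random

# Illustrative SDLM-style update on a separable quadratic loss
#   f(w) = 0.5 * sum_i H[i] * (w[i] - T[i])**2
# so grad_i = H[i] * (w[i] - T[i]) and the diagonal Hessian entry is H[i].
# MU is a small damping term (an assumption here); LR is the single
# learning rate the abstract refers to.

H = [1.0, 4.0, 9.0]          # per-coordinate curvature (diagonal Hessian)
T = [2.0, -1.0, 0.5]         # optimum the workers should converge to
LR, MU = 0.5, 1e-3           # single learning rate plus small damping
w = [0.0, 0.0, 0.0]          # shared parameters, updated lock-free

def worker(steps, seed):
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(len(w))        # stochastic coordinate pick
        g = H[i] * (w[i] - T[i])         # gradient of coordinate i
        w[i] -= LR / (H[i] + MU) * g     # curvature-scaled step

# Peer workers update the shared parameters asynchronously, with no locks.
threads = [threading.Thread(target=worker, args=(2000, s)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print([round(x, 3) for x in w])
```

Because each step divides the gradient by the local curvature, steeply curved coordinates take proportionally smaller raw steps, so one learning rate works for all of them; that is the property the abstract highlights. A real data-parallel implementation would replace the closed-form gradient with minibatch backpropagation and an estimated diagonal Hessian.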