Distributed training for multi-layer neural networks by consensus
Over the past decade, there has been a growing interest in large-scale and privacy-concerned machine learning, especially in the situation where the data cannot be shared due to privacy protection or cannot be centralized due to computational limitations. Parallel computation has been proposed to circumvent these limitations, usually based on the master-slave and decentralized topologies, and the comparison study shows that a decentralized graph could avoid the possible communication jam on the central agent but incur extra communication cost. In this brief, a consensus algorithm is designed to allow all agents over the decentralized graph to converge to each other, and the distributed neural networks with enough consensus steps could have nearly the same performance as the centralized training model. Through the analysis of convergence, it is proved that all agents over an undirected graph could converge to the same optimal model even with only a single consensus step, and this can significantly reduce the communication cost. Simulation studies demonstrate that the proposed distributed training algorithm for multi-layer neural networks without data exchange could exhibit comparable or even better performance than the centralized training model.
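The abstract describes agents that each train a local copy of the network on their private data and then run consensus (neighbour-averaging) steps over an undirected communication graph. The following is a minimal sketch of that idea, not the authors' implementation: the function names, the simple logistic model, and the Metropolis mixing weights are illustrative assumptions.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of a logistic model on an agent's private data (no data leaves the agent)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
    grad = X.T @ (p - y) / len(y)           # cross-entropy gradient
    return w - lr * grad

def consensus_round(weights, adjacency, steps=1):
    """Average each agent's parameter vector with its neighbours over an undirected graph.

    Uses Metropolis weights, so the mixing matrix is doubly stochastic; with enough steps
    all agents approach the network-wide average, and the paper's claim is that even a
    single step suffices for all agents to reach the same optimal model.
    """
    n = len(weights)
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()           # each row (and column) sums to one
    stacked = np.stack(weights)              # shape: (n_agents, n_params)
    for _ in range(steps):
        stacked = W @ stacked                # one consensus (averaging) step
    return list(stacked)

# Example: 3 agents on a line graph, each training on its own random data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    data = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
    ws = [np.zeros(4) for _ in range(3)]
    for _ in range(100):
        ws = [local_sgd_step(w, X, y) for w, (X, y) in zip(ws, data)]
        ws = consensus_round(ws, A, steps=1)  # single consensus step per iteration, as in the paper's low-communication regime
```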
Main Authors: | Liu, Bo, Ding, Zhengtao, Lv, Chen |
Other Authors: | School of Mechanical and Aerospace Engineering |
Format: | Article |
Language: | English |
Published: | 2022 |
Subjects: | Engineering::Mechanical engineering; Engineering::Electrical and electronic engineering; Backpropagation; Consensus |
Online Access: | https://hdl.handle.net/10356/161253 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-161253 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-161253 2022-08-22T07:28:26Z
Distributed training for multi-layer neural networks by consensus
Liu, Bo; Ding, Zhengtao; Lv, Chen
School of Mechanical and Aerospace Engineering
Engineering::Mechanical engineering; Engineering::Electrical and electronic engineering; Backpropagation; Consensus
Over the past decade, there has been a growing interest in large-scale and privacy-concerned machine learning, especially in the situation where the data cannot be shared due to privacy protection or cannot be centralized due to computational limitations. Parallel computation has been proposed to circumvent these limitations, usually based on the master-slave and decentralized topologies, and the comparison study shows that a decentralized graph could avoid the possible communication jam on the central agent but incur extra communication cost. In this brief, a consensus algorithm is designed to allow all agents over the decentralized graph to converge to each other, and the distributed neural networks with enough consensus steps could have nearly the same performance as the centralized training model. Through the analysis of convergence, it is proved that all agents over an undirected graph could converge to the same optimal model even with only a single consensus step, and this can significantly reduce the communication cost. Simulation studies demonstrate that the proposed distributed training algorithm for multi-layer neural networks without data exchange could exhibit comparable or even better performance than the centralized training model.
2022-08-22T07:28:26Z 2022-08-22T07:28:26Z 2019 Journal Article
Liu, B., Ding, Z. & Lv, C. (2019). Distributed training for multi-layer neural networks by consensus. IEEE Transactions on Neural Networks and Learning Systems, 31(5), 1771-1778. https://dx.doi.org/10.1109/TNNLS.2019.2921926
ISSN: 2162-237X
https://hdl.handle.net/10356/161253
DOI: 10.1109/TNNLS.2019.2921926
PMID: 31265422
Scopus: 2-s2.0-85081545178
Issue: 5, Volume: 31, Pages: 1771-1778
en
IEEE Transactions on Neural Networks and Learning Systems
© 2019 IEEE. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Mechanical engineering Engineering::Electrical and electronic engineering Backpropagation Consensus |
description |
Over the past decade, there has been a growing interest in large-scale and privacy-concerned machine learning, especially in the situation where the data cannot be shared due to privacy protection or cannot be centralized due to computational limitations. Parallel computation has been proposed to circumvent these limitations, usually based on the master-slave and decentralized topologies, and the comparison study shows that a decentralized graph could avoid the possible communication jam on the central agent but incur extra communication cost. In this brief, a consensus algorithm is designed to allow all agents over the decentralized graph to converge to each other, and the distributed neural networks with enough consensus steps could have nearly the same performance as the centralized training model. Through the analysis of convergence, it is proved that all agents over an undirected graph could converge to the same optimal model even with only a single consensus step, and this can significantly reduce the communication cost. Simulation studies demonstrate that the proposed distributed training algorithm for multi-layer neural networks without data exchange could exhibit comparable or even better performance than the centralized training model. |
author2 |
School of Mechanical and Aerospace Engineering |
format |
Article |
author |
Liu, Bo Ding, Zhengtao Lv, Chen |
author_sort |
Liu, Bo |
title |
Distributed training for multi-layer neural networks by consensus |
publishDate |
2022 |
url |
https://hdl.handle.net/10356/161253 |
_version_ |
1743119577530761216 |