Byzantine-resilient decentralized stochastic gradient descent
Decentralized learning has gained great popularity as a way to improve learning efficiency and preserve data privacy. Each computing node makes an equal contribution to collaboratively learning a deep learning model. Eliminating the centralized parameter server (PS) can effectively address many issues such as...
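To make the setting concrete, below is a minimal, hypothetical sketch of one node's update in decentralized SGD with a Byzantine-robust aggregation rule (coordinate-wise trimmed mean). The function names, the toy quadratic objective, and the choice of trimmed mean are illustrative assumptions for exposition; they are not the algorithm proposed in the record above.

```python
import numpy as np

def trimmed_mean(models, trim=1):
    # Coordinate-wise trimmed mean: a common Byzantine-robust aggregation
    # heuristic (illustrative only; not necessarily the rule in this paper).
    stacked = np.sort(np.stack(models), axis=0)
    return stacked[trim:len(models) - trim].mean(axis=0)

def decentralized_sgd_step(w, neighbor_models, grad_fn, lr=0.01, trim=1):
    # One round at a single node, with no parameter server:
    # 1) robustly aggregate the models received from neighbors (plus its own),
    # 2) take a local stochastic gradient step.
    w_agg = trimmed_mean(neighbor_models + [w], trim=trim)
    return w_agg - lr * grad_fn(w_agg)

# Toy usage: 5 nodes minimizing ||w - target||^2; node 4 is Byzantine and
# broadcasts arbitrary values instead of its true model.
rng = np.random.default_rng(0)
target = np.ones(3)
grad_fn = lambda w: 2 * (w - target)  # gradient of the quadratic loss
models = [rng.normal(size=3) for _ in range(5)]

for _ in range(100):
    broadcast = models[:4] + [rng.normal(scale=10, size=3)]  # node 4 lies
    models = [
        decentralized_sgd_step(
            models[i], [broadcast[j] for j in range(5) if j != i], grad_fn
        )
        for i in range(4)
    ] + [models[4]]

print(models[0])  # honest nodes end up near `target` despite the attacker
```

The trimmed mean discards the most extreme value per coordinate before averaging, so a single outlier model cannot drag honest nodes arbitrarily far, which is the intuition behind many Byzantine-resilient aggregation rules.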
Main Authors: Guo, Shangwei; Zhang, Tianwei; Yu, Han; Xie, Xiaofei; Ma, Lei; Xiang, Tao; Liu, Yang
Other Authors: College of Computing and Data Science
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/179057
Institution: Nanyang Technological University
Similar Items
- Byzantine-resilient decentralized stochastic gradient descent
  by: Guo, Shangwei, et al. Published: (2022)
- Efficient and privacy-preserving feature importance-based vertical federated learning
  by: Li, Anran, et al. Published: (2024)
- Securing federated learning: a covert communication-based approach
  by: Xie, Yuan-Ai, et al. Published: (2024)
- Practical attribute reconstruction attack against federated learning
  by: Chen, Chen, et al. Published: (2024)
- Efficient asynchronous multi-participant vertical federated learning
  by: Shi, Haoran, et al. Published: (2024)