Efficient asynchronous multi-participant vertical federated learning
Vertical Federated Learning (VFL) is a privacy-preserving distributed machine learning paradigm that collaboratively trains machine learning models with participants whose local data overlap largely in the sample space, but not in the feature space. Existing VFL methods are mainly based on synchronous computation and homomorphic encryption (HE). Due to differences in the communication and computation resources of the participants, straggling participants can cause delays during synchronous VFL model training, resulting in low computational efficiency. In addition, HE incurs high computation and communication costs. Moreover, it is difficult to establish a VFL coordinator (a.k.a. server) that all participants can trust. To address these problems, we propose an efficient Asynchronous Multi-participant Vertical Federated Learning method (AMVFL). AMVFL leverages asynchronous training, which reduces waiting time. At the same time, secret sharing is used instead of HE for privacy protection, which further reduces the computational cost. In addition, AMVFL does not require a trusted entity to serve as the VFL coordinator. Experimental results based on real-world and synthetic datasets demonstrate that AMVFL can significantly reduce computational cost and improve model accuracy compared to five state-of-the-art VFL methods.
Main Authors: Shi, Haoran; Xu, Yonghui; Jiang, Yali; Yu, Han; Cui, Lizhen
Other Authors: College of Computing and Data Science
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Artificial intelligence; Federated learning
Online Access: https://hdl.handle.net/10356/179059
Institution: Nanyang Technological University
id |
sg-ntu-dr.10356-179059 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-179059 2024-07-18T03:23:08Z
Efficient asynchronous multi-participant vertical federated learning
Shi, Haoran; Xu, Yonghui; Jiang, Yali; Yu, Han; Cui, Lizhen
College of Computing and Data Science; School of Computer Science and Engineering; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR)
Computer and Information Science; Artificial intelligence; Federated learning
Vertical Federated Learning (VFL) is a privacy-preserving distributed machine learning paradigm that collaboratively trains machine learning models with participants whose local data overlap largely in the sample space, but not in the feature space. Existing VFL methods are mainly based on synchronous computation and homomorphic encryption (HE). Due to differences in the communication and computation resources of the participants, straggling participants can cause delays during synchronous VFL model training, resulting in low computational efficiency. In addition, HE incurs high computation and communication costs. Moreover, it is difficult to establish a VFL coordinator (a.k.a. server) that all participants can trust. To address these problems, we propose an efficient Asynchronous Multi-participant Vertical Federated Learning method (AMVFL). AMVFL leverages asynchronous training, which reduces waiting time. At the same time, secret sharing is used instead of HE for privacy protection, which further reduces the computational cost. In addition, AMVFL does not require a trusted entity to serve as the VFL coordinator. Experimental results based on real-world and synthetic datasets demonstrate that AMVFL can significantly reduce computational cost and improve model accuracy compared to five state-of-the-art VFL methods.
Funders: Agency for Science, Technology and Research (A*STAR); AI Singapore; Nanyang Technological University; National Research Foundation (NRF)
Submitted/Accepted version
This work is supported, in part, by the NSFC No. 91846205; National Key R&D Program of China No. 2021YFF0900800; SDNSFC No. ZR2019LZH008; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) (No. 2021CXGC010108); the Fundamental Research Funds of Shandong University; the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR) (NSC-2019-011); the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-019); the Joint NTU-WeBank Research Centre on Fintech, Nanyang Technological University, Singapore; the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; the Nanyang Assistant Professorship (NAP); and the Future Communications Research & Development Programme (FCP-NTU-RG-2021-014).
2024-07-18T00:55:12Z 2024-07-18T00:55:12Z 2022 Journal Article
Shi, H., Xu, Y., Jiang, Y., Yu, H. & Cui, L. (2022). Efficient asynchronous multi-participant vertical federated learning. IEEE Transactions on Big Data. https://dx.doi.org/10.1109/TBDATA.2022.3201729
2332-7790
https://hdl.handle.net/10356/179059
10.1109/TBDATA.2022.3201729
en
AISG2-RP-2020-019; A20G8b0102; FCP-NTU-RG-2021-014; NSC-2019-011
IEEE Transactions on Big Data
© 2022 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TBDATA.2022.3201729.
application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science; Artificial intelligence; Federated learning |
description |
Vertical Federated Learning (VFL) is a privacy-preserving distributed machine learning paradigm that collaboratively trains machine learning models with participants whose local data overlap largely in the sample space, but not in the feature space. Existing VFL methods are mainly based on synchronous computation and homomorphic encryption (HE). Due to differences in the communication and computation resources of the participants, straggling participants can cause delays during synchronous VFL model training, resulting in low computational efficiency. In addition, HE incurs high computation and communication costs. Moreover, it is difficult to establish a VFL coordinator (a.k.a. server) that all participants can trust. To address these problems, we propose an efficient Asynchronous Multi-participant Vertical Federated Learning method (AMVFL). AMVFL leverages asynchronous training, which reduces waiting time. At the same time, secret sharing is used instead of HE for privacy protection, which further reduces the computational cost. In addition, AMVFL does not require a trusted entity to serve as the VFL coordinator. Experimental results based on real-world and synthetic datasets demonstrate that AMVFL can significantly reduce computational cost and improve model accuracy compared to five state-of-the-art VFL methods. |
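As background to the abstract's point that secret sharing can stand in for homomorphic encryption, the sketch below shows plain additive secret sharing: a value is split into random shares that individually reveal nothing but sum back to the secret, and shares can be added locally to aggregate values without decryption. This is a generic illustration only, not the AMVFL protocol; the function names and the modulus are assumptions.

```python
import random

# Illustrative modulus for share arithmetic (an assumption, not from the paper).
PRIME = 2**61 - 1

def share(value, n_parties):
    """Split `value` into n_parties additive shares modulo PRIME.

    The first n-1 shares are uniformly random, so any proper subset
    of shares is statistically independent of the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

# Splitting and recombining is lossless.
print(reconstruct(share(123456, 3)))  # → 123456

# Additive homomorphism: parties add their shares of two secrets
# locally, and the sums reconstruct to the sum of the secrets.
a, b = share(7, 3), share(5, 3)
print(reconstruct([(x + y) % PRIME for x, y in zip(a, b)]))  # → 12
```

The additive-homomorphism property is what lets shares replace HE for aggregation: only modular additions are needed, which is far cheaper than ciphertext arithmetic.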
author2 |
College of Computing and Data Science |
author_facet |
College of Computing and Data Science; Shi, Haoran; Xu, Yonghui; Jiang, Yali; Yu, Han; Cui, Lizhen |
format |
Article |
author |
Shi, Haoran; Xu, Yonghui; Jiang, Yali; Yu, Han; Cui, Lizhen |
author_sort |
Shi, Haoran |
title |
Efficient asynchronous multi-participant vertical federated learning |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/179059 |
_version_ |
1806059758208155648 |