Privacy and robustness in federated learning: attacks and defenses

As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative...

Full description

Saved in:
Bibliographic Details
Main Authors: Lyu, Lingjuan, Yu, Han, Ma, Xingjun, Chen, Chen, Sun, Lichao, Zhao, Jun, Yang, Qiang, Yu, Philip S.
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Federated Learning; Privacy
Online Access:https://hdl.handle.net/10356/164531
Institution: Nanyang Technological University
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2022, PP, pp. 1-21
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2022.3216981
PMID: 36355741
Citation: Lyu, L., Yu, H., Ma, X., Chen, C., Sun, L., Zhao, J., Yang, Q. & Yu, P. S. (2022). Privacy and robustness in federated learning: attacks and defenses. IEEE Transactions on Neural Networks and Learning Systems, PP, 1-21. https://dx.doi.org/10.1109/TNNLS.2022.3216981
Funding: This work was supported in part by Sony AI; in part by the Joint NTU-WeBank Research Centre on Fintech under Award NWJ-2020-008; in part by Nanyang Technological University, Singapore; in part by the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR) under Grant NSC-2019-011; in part by the National Research Foundation, Singapore, under its AI Singapore Programme under AISG Award AISG2-RP-2020-019; in part by the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund, Singapore, under Grant A20G8b0102; in part by Nanyang Technological University through the Nanyang Assistant Professorship (NAP); and in part by the Future Communications Research & Development Programme under Grant FCPNTU-RG-2021-014. The work of Qiang Yang was supported in part by the Hong Kong RGC Theme-Based Research Scheme under Grant T41-603/20-R. The work of Philip S. Yu was supported in part by NSF under Grant III-1763325, Grant III-1909323, Grant III-2106758, and Grant SaTC1930941.
© 2022 IEEE. All rights reserved.
Building: NTU Library
Country: Singapore
Content provider: NTU Library
Collection: DR-NTU
topic Engineering::Computer science and engineering
Federated Learning
Privacy
Description: As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this article, we conduct a comprehensive survey on privacy and robustness in FL over the past five years. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) privacy attacks and defenses; and 3) poisoning attacks and defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions toward robust and privacy-preserving FL and their interplay with the multidisciplinary goals of FL.
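The FL setting the survey covers can be illustrated with a minimal federated-averaging sketch: clients train locally on private data and only model weights reach the server. The linear-regression model, two-client setup, and learning rate below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server aggregation: average client weights, weighted by local data size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Two clients hold private data drawn from the same underlying model;
# raw data never leaves a client, only the trained weights do.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

Note that this baseline has neither of the survey's two concerns built in: the exchanged weights can leak information about client data (privacy attacks), and a malicious client can submit arbitrary updates that the weighted mean absorbs (poisoning attacks), which is what the defenses reviewed in the article address.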