Study of attacks on federated learning

People are becoming increasingly aware of the data privacy issues that traditional centralised machine learning can cause, even as it brings convenience to everyday life. Federated Learning has emerged as an alternative for the distributed training of large-scale deep neural networks, in which only model updates, rather than raw data, are shared with a central server. However, this decentralised form of machine learning gives rise to new security threats from potentially malicious participants. This project studies a targeted data poisoning attack against Federated Learning known as the label flipping attack, in which the attacker poisons the global model by sending model updates trained on deliberately mislabelled data. The project examines the factors that determine the attack's impact on the global model. It first demonstrates that the attack causes substantial drops in classification accuracy and class recall even with a small percentage of malicious participants, then studies the impact of targeting multiple classes compared to a single class. Finally, the longevity of the attack when launched in early versus late training rounds and the availability of malicious participants are studied, and the relationship between the two is determined. A defence strategy is proposed that identifies malicious participants by the dissimilar gradients in the model updates they send.
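The record itself contains no code; as an illustrative sketch of the attack the abstract describes, the snippet below shows how a malicious participant might flip the labels of one class to another before local training. The function name and class choices are assumptions for illustration, not taken from the project.

```python
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Label flipping attack: relabel every example of source_class as
    target_class, so the malicious participant's local training (and the
    model update it sends to the server) pushes the global model to
    misclassify the targeted class."""
    poisoned = labels.copy()
    poisoned[poisoned == source_class] = target_class
    return poisoned

# Example: flip class 1 to class 7 in a local label vector
local_labels = np.array([0, 1, 1, 3, 7, 1])
print(flip_labels(local_labels, source_class=1, target_class=7))  # [0 7 7 3 7 7]
```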

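The proposed defence, identifying participants whose updates cause dissimilar gradients, is likewise described only at a high level. A minimal sketch of one plausible reading, assuming a cosine-similarity test of each participant's update against the mean update with a hypothetical threshold, is:

```python
import numpy as np

def flag_suspicious(updates, threshold=0.0):
    """Flag participants whose flattened model update is cosine-dissimilar
    to the mean update across all participants. `threshold` is an assumed
    cutoff; the project's actual criterion is not specified in the record."""
    updates = np.asarray(updates, dtype=float)
    mean_update = updates.mean(axis=0)
    denom = np.linalg.norm(updates, axis=1) * np.linalg.norm(mean_update)
    sims = updates @ mean_update / np.where(denom == 0, 1.0, denom)
    return [i for i, s in enumerate(sims) if s < threshold]

# Three benign updates and one pointing the opposite way
updates = [[1.0, 0.9], [0.8, 1.1], [1.1, 1.0], [-1.0, -1.0]]
print(flag_suspicious(updates))  # [3]
```

In a real aggregation round, a server applying this kind of check would filter the flagged updates before averaging the rest into the global model.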

Bibliographic Details
Main Author: Thung, Jia Cheng
Other Authors: Yeo Chai Kiat
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access:https://hdl.handle.net/10356/154018
Institution: Nanyang Technological University
School: School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE20-0799
Citation: Thung, J. C. (2021). Study of attacks on federated learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/154018