Study of attacks on federated learning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/154018
Institution: Nanyang Technological University
Summary: In today's era, people are becoming increasingly aware of the data privacy issues that traditional centralised machine learning can cause even as it brings convenience to everyday life. To tackle this problem, Federated Learning has emerged as an alternative for distributed training of large-scale deep neural networks, in which only model updates, rather than raw data, are shared with a central server. However, this decentralised form of machine learning gives rise to new security threats from potentially malicious participants. This project studies a targeted data poisoning attack against Federated Learning known as the label flipping attack. The attack aims to poison the global model through model updates computed on deliberately mislabelled local datasets. The project examines the factors that determine the attack's impact on the global model. It first demonstrates how the attack causes substantial drops in classification accuracy and class recall, even with a small percentage of malicious participants. It then studies the impact of targeting multiple classes compared to a single class. Finally, the longevity of the attack in early versus late training rounds and the availability of malicious participants are studied, along with the relationship between the two. A defence strategy is proposed that identifies malicious participants by the dissimilar gradients in the model updates they send.
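As a concrete illustration of the label flipping attack the summary describes, the sketch below shows how a malicious participant might poison its local dataset before computing the model update it sends to the server. The dataset format, class indices, and function name are illustrative assumptions, not code from the project.

```python
# A minimal sketch of a label flipping attack, assuming a local dataset stored
# as a list of (features, label) pairs. All names here are hypothetical.

import random


def flip_labels(dataset, source_class, target_class, flip_fraction=1.0):
    """Relabel a fraction of `source_class` samples as `target_class`.

    A malicious participant trains its local model on this poisoned copy, so
    the update it sends pushes the global model toward misclassifying
    `source_class` as `target_class`.
    """
    poisoned = []
    for features, label in dataset:
        if label == source_class and random.random() < flip_fraction:
            poisoned.append((features, target_class))  # flipped label
        else:
            poisoned.append((features, label))         # untouched sample
    return poisoned


# Example: flip every class-1 sample into class 7 in a toy dataset.
local_data = [([0.1, 0.2], 1), ([0.3, 0.4], 7), ([0.5, 0.6], 1)]
poisoned_data = flip_labels(local_data, source_class=1, target_class=7)
print(poisoned_data)  # all class-1 samples now carry label 7
```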
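The proposed defence flags participants whose model updates carry dissimilar gradients. Below is a minimal sketch of one plausible reading of that idea, scoring each participant's flattened update by its cosine similarity to the mean of the other participants' updates; the flattening scheme and the similarity threshold are assumptions, not the project's documented method.

```python
# A hedged sketch of a gradient-dissimilarity defence: flag clients whose
# update direction diverges from the rest. Threshold and vectorisation are
# illustrative assumptions.

import numpy as np


def flag_dissimilar_updates(updates, threshold=0.0):
    """Return indices of clients whose update is dissimilar to the others.

    `updates` is a list of 1-D numpy arrays (one flattened model update per
    participant). A client is flagged when the cosine similarity between its
    update and the mean of all other updates falls below `threshold`.
    """
    flagged = []
    stacked = np.stack(updates)
    total = stacked.sum(axis=0)
    for i, u in enumerate(stacked):
        others_mean = (total - u) / (len(stacked) - 1)
        cos = u @ others_mean / (
            np.linalg.norm(u) * np.linalg.norm(others_mean) + 1e-12
        )
        if cos < threshold:
            flagged.append(i)
    return flagged


# Example: two benign clients pull in one direction, one attacker in another.
benign_a = np.array([1.0, 1.0, 0.9])
benign_b = np.array([0.9, 1.1, 1.0])
attacker = np.array([-1.0, -0.8, -1.1])
print(flag_dissimilar_updates([benign_a, benign_b, attacker]))  # -> [2]
```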