Study on attacks against federated learning

Bibliographic Details
Main Author: Guo, Feiyan
Other Authors: Yeo Chai Kiat
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/163119
Description
Summary: With the rise of artificial intelligence, the need for data also increases. However, many strict data privacy laws have been put in place to protect personal data from being leaked, and this greatly limits the use of artificial intelligence. Federated learning is a new form of collaborative machine learning that leverages decentralized data for training models. This introduces the possibility of being exposed to poisoned data from malicious participants. In this project, the author explores different attack and defence methodologies to gain a better understanding of how federated learning works. The focus is on the coordinated backdoor attack with model-dependent triggers as the attack methodology and robust learning rates as the defence methodology. The defence methodology is implemented into an open-source federated learning base code. This will allow federated learning to be more widely used, since it is less likely to be compromised by malicious attackers in the presence of built-in defences.
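The robust learning rate defence named in the summary is, in published descriptions, a server-side aggregation rule that flips the per-parameter learning rate whenever too few clients agree on the sign of the update for that parameter. The sketch below is a minimal illustration of that idea only, assuming the sign-agreement variant; the function name `aggregate_with_rlr`, the threshold value, and the NumPy setup are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def aggregate_with_rlr(global_weights, client_updates, lr=1.0, threshold=4):
    """Aggregate client updates with a sign-agreement robust learning rate.

    global_weights : 1-D array of current global model parameters
    client_updates : list of 1-D arrays, one local update (delta) per client
    lr             : server learning rate magnitude
    threshold      : minimum |sum of signs| needed to keep a positive rate
    """
    updates = np.stack(client_updates)                 # (num_clients, num_params)
    sign_agreement = np.abs(np.sign(updates).sum(axis=0))
    # Parameters where enough clients agree on the update direction keep +lr;
    # the rest get -lr, which pushes suspected backdoor coordinates away from
    # the direction a coordinated minority is trying to plant.
    per_param_lr = np.where(sign_agreement >= threshold, lr, -lr)
    return global_weights + per_param_lr * updates.mean(axis=0)

# Toy usage: five benign clients plus one client pushing a coordinated direction.
rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [rng.normal(0.1, 0.02, 10) for _ in range(5)] + [-np.ones(10)]
print(aggregate_with_rlr(w, updates, lr=1.0, threshold=4))
```

In this toy run the five benign updates agree in sign, so the learning rate stays positive and the single outlier is averaged away rather than steering the global model.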