Personalised federated learning with differential privacy and gradient selection

Bibliographic Details
Main Authors: Lee, Jason Zhi Xin; Ng, Kai Chin; Toh, Arnold Xuan Ming
Other Authors: Lam Kwok Yan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/151517
Institution: Nanyang Technological University
Summary: The fast-emerging field of federated learning holds the promise of allowing clients to contribute to a central machine learning model without sending their data to a central server, thus preserving the privacy of their data. Two issues arise: dealing with statistical heterogeneity across client datasets, which is common in real-world settings, and obtaining stricter data privacy by employing privacy-preserving mechanisms. In this paper, we use personalised layers in a federated Convolutional Neural Network model to address statistical heterogeneity, and we use differential privacy to provide mathematically rigorous privacy guarantees for the federated learning model. We also propose a gradient selection technique to increase model performance. We develop a framework combining these techniques and experimentally demonstrate its effectiveness on a dataset, improving model performance whilst maintaining a reasonable level of privacy guarantee and training efficiency.
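
Illustrative sketch (not part of the record): the following is a minimal, hypothetical NumPy approximation of one federated round combining the three ideas named in the summary, namely personalised layers kept local to each client, differentially private shared updates (clipping plus Gaussian noise), and top-k gradient selection. It is not the authors' implementation; all function names (clip_and_noise, top_k_select, local_update), shapes, and hyperparameters below are assumptions for illustration only.

# One toy federated round: personalised layers stay on-device, only the
# shared "base" update is selected, clipped, noised, and aggregated.
import numpy as np

rng = np.random.default_rng(0)

CLIP_NORM = 1.0     # assumed L2 clipping bound for the DP mechanism
NOISE_MULT = 0.8    # assumed Gaussian noise multiplier
TOP_K_FRAC = 0.1    # assumed fraction of coordinates kept by gradient selection

def clip_and_noise(update, clip_norm=CLIP_NORM, noise_mult=NOISE_MULT):
    """Clip a client's base-layer update to an L2 bound and add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

def top_k_select(update, frac=TOP_K_FRAC):
    """Keep only the largest-magnitude coordinates of an update (gradient selection)."""
    k = max(1, int(frac * update.size))
    mask = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    mask[idx] = 1.0
    return update * mask

def local_update(global_base, personal_layer):
    """Placeholder for local training: returns a stand-in base-layer gradient
    and an updated personalised layer (in practice, SGD on a CNN over local data)."""
    base_grad = rng.normal(0.0, 0.1, size=global_base.shape)
    personal_layer = personal_layer - 0.01 * rng.normal(size=personal_layer.shape)
    return base_grad, personal_layer

# Toy setup: 5 clients share a 100-dim base model; each keeps a 10-dim personalised layer.
global_base = np.zeros(100)
personal = [np.zeros(10) for _ in range(5)]

for _ in range(3):
    noisy_updates = []
    for c in range(5):
        base_grad, personal[c] = local_update(global_base, personal[c])
        base_grad = top_k_select(base_grad)              # gradient selection before upload
        noisy_updates.append(clip_and_noise(base_grad))  # DP protection of the shared update
    # Server aggregates only the shared base layers; personalised layers never leave clients.
    global_base = global_base - np.mean(noisy_updates, axis=0)

print("final base-layer norm:", np.linalg.norm(global_base))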