Cost of federated learning
Main Author: | |
---|---|
Other Authors: | |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/163344 |
Institution: | Nanyang Technological University |
Summary: Federated Learning (FL) is a term coined by Google in 2016, at a time when the misuse of personal data was gaining global attention. The main goal was to build a machine learning (ML) model from datasets distributed across the world while retaining personal privacy. FL enables edge devices such as mobile phones or other Internet of Things (IoT) devices to collaboratively learn a shared prediction model while keeping all their training data local and private.
Traditional machine learning is usually trained centrally, with full visibility of the entire dataset. This raises privacy issues, because personal data must be transferred to a central server for training. FL addresses this by letting edge devices such as mobile phones collaboratively learn a shared prediction model while keeping their local data private. However, if not optimized properly, this places a heavy networking burden on the edge device: in each round, every client independently computes an update to the current model from its own data and sends that update back to the server for aggregation. The sheer volume of data involved in shuttling the model back and forth can overwhelm a resource-constrained device such as an IoT node. Hence, communication cost is of utmost importance when edge devices are studied in this framework.
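To make the round structure concrete, the following is a minimal FedAvg-style sketch in plain Python/NumPy rather than the project's TensorFlow implementation; the local "gradient" is a toy placeholder and all names are illustrative, not the report's actual code:

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1):
    """Hypothetical local step: nudge the global model toward this
    client's data and return only the weight delta, not the data."""
    gradient = global_weights - local_data.mean(axis=0)  # toy gradient
    new_weights = global_weights - lr * gradient
    return new_weights - global_weights  # only this delta goes on the wire

def server_round(global_weights, client_datasets):
    """Server aggregates the client deltas (FedAvg, equal weighting)."""
    deltas = [client_update(global_weights, d) for d in client_datasets]
    return global_weights + np.mean(deltas, axis=0)

# Toy demo: three clients, a 4-dimensional "model".
rng = np.random.default_rng(0)
weights = np.zeros(4)
clients = [rng.normal(loc=i, size=(20, 4)) for i in range(3)]
for _ in range(5):
    weights = server_round(weights, clients)
print(weights)  # raw client data never left the clients
```

Even in this toy version, each round costs every client one full model download and one full model upload, which is exactly the traffic the report seeks to reduce.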
Whatever issues the emergence of the FL framework brings, its core strengths still outweigh most of its shortcomings. One – the data privacy of every client is preserved throughout training. Two – it generalizes far better, since it draws on raw data from edge devices all over the world rather than training on a single client repeatedly. FL may well be the future ML paradigm.
This report discusses FL research with respect to edge devices. It compares existing approaches and proposes ways to reduce communication costs by a substantial amount through compression and quantization, both with and without privacy-preserving techniques.
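As an illustration of the kind of compression the report targets, here is a hedged sketch of naive 8-bit uniform quantization applied to a model update before transmission; the scheme and every name in it are illustrative assumptions, not the report's actual method:

```python
import numpy as np

def quantize_update(update, bits=8):
    """Uniformly quantize a float32 update to `bits`-bit integers.
    At 8 bits this sends roughly 4x less data than raw float32."""
    lo, hi = update.min(), update.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((update - lo) / scale).astype(np.uint8)
    return q, lo, scale  # the ints plus two floats go on the wire

def dequantize_update(q, lo, scale):
    """Server-side reconstruction of the approximate update."""
    return q.astype(np.float32) * scale + lo

update = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, lo, scale = quantize_update(update)
approx = dequantize_update(q, lo, scale)
print(update.nbytes, q.nbytes)        # 4000 vs 1000 bytes
print(np.abs(update - approx).max())  # error bounded by ~scale/2
```

The trade-off is the usual one: fewer bits per weight means less upload traffic per round at the cost of a bounded quantization error in the aggregated model.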
The implementation is performed and demonstrated solely in Google's TensorFlow.