Cost of federated learning

Federated Learning (FL) is a term coined by Google in 2016, at a time when the misuse of personal data was gaining global attention. The main goal was to build a machine learning (ML) model from datasets distributed across the world while retaining personal privacy. FL enables edge devices, such as mobile phones and other Internet of Things (IoT) devices, to collaboratively learn a shared prediction model while keeping all training data intact and privately held. Traditional machine learning is usually trained centrally, with a complete view of the entire dataset; this raises privacy issues because personal data must be transferred to a central server for ML training. FL addresses this by having each client independently compute an update to the current model from its own local data and send only that update back to the server for aggregation. If not optimized properly, however, this strains the edge device's network connection: the sheer volume of data needed to send the model back and forth to the server is too great for a low-resource device such as an IoT node. Communication cost is therefore of utmost importance when edge devices are studied in this framework. Whatever issues the emergence of the FL framework brings, its core technology still outweighs its shortcomings. One, the data privacy of every client is preserved throughout training. Two, it generalizes far more effectively, drawing on raw data from edge devices all over the world rather than training on one client repeatedly. Undeniably, FL could be the future of the ML paradigm. This report discusses FL research with respect to edge devices: it compares and proposes ways to reduce communication costs by a substantial amount through compression and quantization, with and without privacy-preserving techniques. Implementation is performed and demonstrated solely in Google's TensorFlow.
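
The round structure described in the abstract, where each client computes a local model update from its own data and the server aggregates the updates, is essentially federated averaging, and quantization is one of the compression techniques named for cutting communication cost. As a rough, hypothetical sketch of both ideas (not code from the report, whose implementation is in TensorFlow), the following NumPy snippet runs federated-averaging rounds over toy linear-regression clients and optionally quantizes each update to 8 bits before "transmission"; every function name, model, and hyperparameter here is illustrative.

import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=1):
    # Each client independently computes an update to the current model
    # from its own data (linear regression stands in for any model).
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w - global_weights                      # only the delta is transmitted

def quantize_8bit(delta):
    # Compress an update to one byte per parameter before transmission.
    scale = float(np.abs(delta).max()) / 127.0 or 1.0
    return np.round(delta / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float64) * scale

def server_round(global_weights, client_datasets, quantize=True):
    # The server aggregates client updates by a weighted average (FedAvg).
    deltas, sizes = [], []
    for X, y in client_datasets:
        delta = local_update(global_weights, X, y)
        if quantize:                               # simulate the lossy channel
            q, scale = quantize_8bit(delta)
            delta = dequantize(q, scale)
        deltas.append(delta)
        sizes.append(len(y))
    return global_weights + np.average(deltas, axis=0, weights=sizes)

# Toy demo: three clients jointly recover a 4-parameter model without
# ever sharing their raw data with the server.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5, 3.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    clients.append((X, X @ w_true + rng.normal(scale=0.1, size=50)))

w = np.zeros(4)
for _ in range(100):
    w = server_round(w, clients)
print("recovered weights:", np.round(w, 2))

Quantizing a float32 update to int8 shrinks each client upload roughly fourfold at the price of a small rounding error, which is the compression-versus-accuracy trade-off the report studies.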

Bibliographic Details
Main Author: Tan, Cheen Hao
Other Authors: Tan Rui (School of Computer Science and Engineering, tanrui@ntu.edu.sg)
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/163344
Degree: Bachelor of Engineering (Computer Engineering)
Project Code: SCSE21-0580
Citation: Tan, C. H. (2022). Cost of federated learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/163344