Differential privacy in machine learning

Bibliographic Details
Main Author: Tan, Nicole
Other Authors: Chattopadhyay, Anupam
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/156368
Institution: Nanyang Technological University
Description
Summary:With the surge in the use of machine learning, stakeholders often have no visibility into how processes operate on their private data, and sharing data to train machine learning models raises growing privacy concerns. Federated learning was introduced as a form of distributed machine learning in which stakeholders keep their data local. This alone, however, is not enough to protect the privacy of stakeholders’ data: as federated learning has become more widely used for training models, attacks targeting the parameters exchanged during training have increased, and these attacks can give attackers access to confidential data. The objective of this project is to use federated learning to create a shared model architecture that incorporates differential privacy across various neural network architectures.
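As an illustration of the approach the abstract describes (this is not code from the project itself), the following is a minimal sketch of one common way to combine federated learning with differential privacy: each client clips its local model update to bound its norm and adds Gaussian noise before sharing it, and the server only ever aggregates the privatized updates. All function names and parameter values here are assumptions chosen for the sketch.

```python
import numpy as np

def dp_client_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Privatize a client's model update: clip its L2 norm, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)  # bounds sensitivity to clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def federated_average(client_updates):
    """Server-side aggregation: average the already-privatized client updates."""
    return np.mean(client_updates, axis=0)

# Hypothetical round with three clients, each holding a local update vector
rng = np.random.default_rng(0)
local_updates = [rng.normal(size=4) for _ in range(3)]
privatized = [dp_client_update(u, rng=rng) for u in local_updates]
global_delta = federated_average(privatized)  # applied to the shared model
```

In a full system the noise scale would be chosen from a target privacy budget (epsilon, delta) via an accountant, and the clipping would typically be applied per training step (as in DP-SGD) rather than once per round; this sketch only shows the clip-and-noise mechanism itself.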