Differential privacy in peer-to-peer federated learning

Bibliographic Details
Main Author: Rajkumar, Snehaa
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165929
Institution: Nanyang Technological University
Description
Summary: Neural networks have become tremendously successful in recent times due to greater computing power and the availability of labelled datasets for a wide range of applications. Training these networks is computationally demanding and often requires proprietary datasets to yield usable insights. To incentivise stakeholders to share their datasets for building stronger neural networks while protecting their privacy interests, it is important to implement differential privacy mechanisms during training, guarding against attacks that might expose their data to malicious agents. The objective of this project is to study how effectively differential privacy, applied to peer-to-peer federated learning, protects proprietary data from exposure.
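
Note: the record does not specify which differential privacy mechanism was implemented. A widely used approach in federated training is DP-SGD-style privatisation, where each peer clips per-example gradients and adds Gaussian noise before its model update leaves the device. The minimal NumPy sketch below illustrates only that core step; the function name, clipping norm, noise multiplier, and gradient shapes are illustrative assumptions, not details taken from the thesis.

import numpy as np

def privatize_gradients(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to clip_norm, add Gaussian noise, and average.

    per_sample_grads: array of shape (num_examples, num_params).
    clip_norm and noise_multiplier are illustrative values, not from the thesis.
    """
    rng = rng or np.random.default_rng()
    # Per-example L2 norms and clipping factors min(1, clip_norm / norm).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors
    # Sum the clipped gradients, add noise calibrated to the clipping norm, then average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=clipped.shape[1]
    )
    return noisy_sum / len(per_sample_grads)

# Example: a peer privatises one batch of gradients before sharing its update.
grads = np.random.randn(32, 10)   # 32 examples, 10 parameters (toy sizes)
update = privatize_gradients(grads)
print(update.shape)               # (10,)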