Differential privacy and membership inference attacks

Bibliographic Details
Main Author: Ong, Ting Yu
Other Authors: Wang, Huaxiong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/166457
Institution: Nanyang Technological University
Description
Summary: The growing use of machine learning on sensitive datasets raises concerns that individual training records may be leaked. A membership inference attack attempts to determine whether a given record was part of a model's training set. This research studies differential privacy, a privacy-preserving mechanism, as a defence against membership inference attacks. Few existing studies examine these two concepts together, so this work extends the analysis to less commonly tested datasets to understand how they interact. Image, time-series and natural-language-processing datasets were used to train the target models and the reference models. As expected, differential privacy hinders the membership inference attack on the image dataset, reducing it to a random guess. For the other data types, the attack showed no observable change before and after differential privacy was applied: it remained at the level of a random guess in both cases. Overall, the results suggest that implementing differential privacy can help mitigate membership inference attacks.
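
For reference, the two notions at the centre of the study can be stated formally; the formulations below are the standard textbook definitions, not quoted from the thesis itself.

A randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if, for all neighbouring datasets $D$ and $D'$ differing in a single record and all measurable output sets $S$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
\]

A membership inference attack $\mathcal{A}$ is judged by how much better it performs than coin flipping; its advantage is
\[
  \mathrm{Adv}(\mathcal{A}) \;=\; \Pr[\mathcal{A}(x)=1 \mid x \in \text{training set}] \;-\; \Pr[\mathcal{A}(x)=1 \mid x \notin \text{training set}] ,
\]
so "reducing the attack to a random guess" corresponds to an advantage near zero, i.e. an attack accuracy near 50%.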