Mitigating membership inference attacks via weighted smoothing

Bibliographic Details
Main Authors: TAN, Minghan, XIE, Xiaofei, SUN, Jun, WANG, Tianhao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8613
https://ink.library.smu.edu.sg/context/sis_research/article/9616/viewcontent/MitigatingMembership_pvoa_cc_by.pdf
Institution: Singapore Management University
Description
Summary: Recent advancements in deep learning have spotlighted a crucial privacy vulnerability to the membership inference attack (MIA), in which an adversary determines whether specific data was present in a model's training set, potentially revealing sensitive information. In this paper, we introduce a technique, weighted smoothing (WS), to mitigate MIA risks. Our approach is anchored on the observation that training samples differ in their vulnerability to MIA, primarily based on their distance to clusters of similar samples. The intuition is that clusters make model predictions more confident, which increases MIA risk. WS therefore strategically introduces noise to training samples, depending on whether each sample is near a cluster or isolated. We evaluate WS against MIAs on multiple benchmark datasets and model architectures, demonstrating its effectiveness. We publish code at https://github.com/BennyTMT/weighted-smoothing.
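The core idea in the summary — scale per-sample noise by how clustered a sample is — can be sketched as follows. This is an illustrative reconstruction only, not the authors' implementation (which is at the GitHub repository above); the k-nearest-neighbour isolation score, the linear noise weighting, and all parameter names and defaults here are assumptions.

```python
import numpy as np

def weighted_smoothing(X, k=5, base_sigma=0.1, rng=None):
    """Illustrative sketch of weighted smoothing (WS).

    Samples that sit inside dense clusters receive more Gaussian noise;
    isolated samples receive less. The isolation score and noise schedule
    are assumptions for illustration, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Pairwise Euclidean distances between all samples.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Mean distance to the k nearest neighbours (column 0 is self, skip it):
    # small value -> sample lies in a cluster, large value -> isolated.
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    # Map isolation to a per-sample weight in [0, 1]:
    # clustered samples (small knn distance) get weight near 1 (more noise),
    # isolated samples get weight near 0 (little noise).
    w = 1.0 - (knn - knn.min()) / (np.ptp(knn) + 1e-12)
    sigma = base_sigma * w
    # Add Gaussian noise scaled per sample.
    return X + rng.normal(size=X.shape) * sigma[:, None]
```

On a toy set of ten coincident points plus one far-away outlier, the clustered points are perturbed while the isolated point is left essentially untouched, matching the stated intuition.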