Data protection with unlearnable examples
The pervasive success of deep learning across diverse fields hinges on the extensive use of large datasets, which often contain sensitive personal information collected without explicit consent. This practice has raised significant privacy concerns, prompting the development of unlearnable examples (UE) as a novel data protection strategy. Unlearnable examples aim to modify data with subtle perturbations that, while imperceptible to humans, prevent machine learning models from effectively learning from them. Existing research has primarily focused on unimodal data, such as images, leaving a gap in the study of UE for multimodal data, which involves complex interactions between data types such as video and audio. This project explores the extension of UE techniques to multimodal learning environments, addressing the unique challenges posed by these datasets. By designing and testing new UE strategies tailored to multimodal data and assessing their impact on model learning and data interpretability, this study aims to advance data privacy in deep learning. Through a comprehensive survey of current UE technology, experimentation with multimodal datasets such as CREMA-D and Kinetics-Sounds, and rigorous analysis, the project seeks to enhance privacy protections in multimodal deep learning frameworks, offering insights and practical solutions for the creation of robust and transferable unlearnable examples.
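The perturbation mechanism the abstract describes is commonly realized as "error-minimizing noise": a small, bounded perturbation optimized so that a surrogate model's training loss on the perturbed sample is already near zero, leaving little useful gradient signal to learn from. The record does not state which method this project uses; the PyTorch sketch below of that min-min formulation is illustrative only, and the function name, surrogate model, and hyperparameters (eps, alpha, steps) are assumptions.

```python
# Illustrative sketch (not this project's confirmed method): crafting
# error-minimizing noise for unlearnable examples with a surrogate classifier.
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft a small L-inf-bounded perturbation that MINIMIZES the loss,
    so training on (x + delta, y) carries almost no learnable signal."""
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Forward pass on the perturbed batch, kept in the valid pixel range.
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Gradient DESCENT on delta (adversarial attacks ascend instead),
            # then project back into the imperceptible eps-ball.
            delta -= alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return delta.detach()

# Hypothetical usage with a surrogate classifier `net` and a batch (x, y):
# delta = error_minimizing_noise(net, x, y)
# x_protected = (x + delta).clamp(0, 1)  # visually near-identical to x
```

In the full min-min scheme this inner minimization alternates with brief ordinary training of the surrogate model, and a multimodal extension (e.g., for CREMA-D or Kinetics-Sounds) would craft one such perturbation per modality; both details go beyond what this record specifies.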
Saved in:
Main Author: | Ma, Xiaoyu
---|---
Other Authors: | Alex Chichung Kot
Format: | Final Year Project
Language: | English
Published: | Nanyang Technological University, 2024
Subjects: | Computer and Information Science Engineering; Deep learning; Unlearnable examples; Data protection
Online Access: | https://hdl.handle.net/10356/177180
Institution: | Nanyang Technological University
id | sg-ntu-dr.10356-177180
---|---
record_format | dspace
spelling | sg-ntu-dr.10356-177180 2024-05-31T15:43:43Z Data protection with unlearnable examples Ma, Xiaoyu Alex Chichung Kot School of Electrical and Electronic Engineering Rapid-Rich Object Search (ROSE) Lab EACKOT@ntu.edu.sg Computer and Information Science Engineering Deep learning Unlearnable examples Data protection [abstract as above] Bachelor's degree 2024-05-27T04:20:02Z 2024-05-27T04:20:02Z 2024 Final Year Project (FYP) Ma, X. (2024). Data protection with unlearnable examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/177180 https://hdl.handle.net/10356/177180 en A3079-231 application/pdf Nanyang Technological University
institution | Nanyang Technological University
building | NTU Library
continent | Asia
country | Singapore
content_provider | NTU Library
collection | DR-NTU
language | English
topic | Computer and Information Science Engineering; Deep learning; Unlearnable examples; Data protection
author2 | Alex Chichung Kot
format | Final Year Project
author | Ma, Xiaoyu
title | Data protection with unlearnable examples
publisher | Nanyang Technological University
publishDate | 2024
url | https://hdl.handle.net/10356/177180
_version_ | 1806059884628672512