Leveraging imperfect restoration for data availability attack



Bibliographic Details
Main Authors: Huang, Yi, Styborski, Jeremy, Lyu, Mingzhi, Wang, Fan, Kong, Adams Wai Kin
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/179131
https://eccv.ecva.net/virtual/2024/poster/1216
Description
Summary: The abundance of online data is at risk of unauthorized usage in training deep learning models. To counter this, various Data Availability Attacks (DAAs) have been devised to make data unlearnable for such models by subtly perturbing the training data. However, existing attacks often excel against either Supervised Learning (SL) or Self-Supervised Learning (SSL) scenarios. Among these, a model-free approach that generates a Convolution-based Unlearnable Dataset (CUDA) stands out as the most robust DAA across both SSL and SL. Nonetheless, CUDA's effectiveness against SSL is underwhelming, and it faces a severe trade-off between image quality and its poisoning effect. In this paper, we conduct a theoretical analysis of CUDA, uncovering the sub-optimal gradients it introduces and elucidating the strategy it employs to induce class-wise bias for data poisoning. Building on this, we propose a novel poisoning method named Imperfect Restoration Poisoning (IRP), aiming to preserve high image quality while achieving strong poisoning effects. Through extensive comparisons of IRP with eight baselines across SL and SSL, coupled with evaluations alongside five representative defense methods, we showcase the superiority of IRP. Code: https://github.com/lyumingzhi/IRP
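The summary describes CUDA as a model-free attack that perturbs training data with class-wise convolutions, inducing a class-wise bias that models latch onto instead of the real content. Below is a minimal illustrative sketch of that class-wise convolutional perturbation idea, not the authors' IRP method or the exact CUDA procedure: every image of a class is filtered with the same randomly drawn kernel, so the filter response becomes a spurious, easily learnable shortcut. The kernel size, normalization, and seed are assumed hyperparameters for illustration, not values taken from the paper.

import torch
import torch.nn.functional as F

def make_class_filters(num_classes: int, kernel_size: int = 3, seed: int = 0) -> torch.Tensor:
    """Draw one random normalized kernel per class, shape (num_classes, 1, k, k)."""
    g = torch.Generator().manual_seed(seed)
    kernels = torch.rand(num_classes, 1, kernel_size, kernel_size, generator=g)
    # Normalize each kernel so overall brightness is roughly preserved.
    return kernels / kernels.sum(dim=(-1, -2), keepdim=True)

def poison_batch(images: torch.Tensor, labels: torch.Tensor, filters: torch.Tensor) -> torch.Tensor:
    """Apply each image's class-specific kernel to all of its channels (depthwise conv)."""
    k = filters.shape[-1]
    out = torch.empty_like(images)
    for i, (img, y) in enumerate(zip(images, labels)):
        # Same class kernel applied to every channel of this image.
        w = filters[y].expand(img.shape[0], 1, k, k)
        out[i] = F.conv2d(img.unsqueeze(0), w, padding=k // 2, groups=img.shape[0]).squeeze(0)
    return out.clamp(0.0, 1.0)

# Usage (hypothetical): poisoned = poison_batch(images, labels, make_class_filters(10))

Because the perturbation is tied to the label rather than to any model, such attacks are model-free; the trade-off noted in the summary arises because stronger filtering makes the shortcut more learnable but degrades image quality, which is the gap IRP targets.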