MACE: mass concept erasure in diffusion models

The rapid expansion of large-scale text-to-image diffusion models has raised growing concerns regarding their potential misuse in creating harmful or misleading content. In this paper, we introduce MACE, a finetuning framework for the task of MAss Concept Erasure. This task aims to prevent models from generating images that embody unwanted concepts when prompted. Existing concept erasure methods are typically restricted to handling fewer than five concepts simultaneously and struggle to find a balance between erasing concept synonyms (generality) and maintaining unrelated concepts (specificity). In contrast, MACE differs by successfully scaling the erasure scope up to 100 concepts and by achieving an effective balance between generality and specificity. This is achieved by leveraging closed-form cross-attention refinement along with LoRA finetuning, collectively eliminating the information of undesirable concepts. Furthermore, MACE integrates multiple LoRAs without mutual interference. We conduct extensive evaluations of MACE against prior methods across four different tasks: object erasure, celebrity erasure, explicit content erasure, and artistic style erasure. Our results reveal that MACE surpasses prior methods in all evaluated tasks. Code is available at https://github.com/Shilin-LU/MACE.
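
The abstract mentions a closed-form cross-attention refinement that remaps erased concepts while preserving unrelated ones. The sketch below is a generic illustration of that style of edit (a ridge-regression update of a key/value projection matrix), not the paper's actual objective; the dimensions, the random embeddings, and the preservation weight `lam` are all illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: d_text = text-embedding dim, d_attn = attention K/V dim.
d_text, d_attn = 8, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_attn, d_text))    # pretrained K/V projection
E_erase = rng.normal(size=(d_text, 3))   # embeddings of concepts to erase
E_anchor = rng.normal(size=(d_text, 3))  # neutral embeddings they should map to
E_keep = rng.normal(size=(d_text, 5))    # embeddings whose outputs must be preserved
lam = 0.1                                # preservation weight (assumed hyperparameter)

# Closed-form minimizer of
#   ||W' E_erase - W E_anchor||_F^2 + lam * ||W' E_keep - W E_keep||_F^2
# Setting the gradient to zero gives  W' A = B  with:
A = E_erase @ E_erase.T + lam * (E_keep @ E_keep.T)
B = W @ (E_anchor @ E_erase.T + lam * (E_keep @ E_keep.T))
W_new = np.linalg.solve(A, B.T).T        # A is symmetric, so solve(A, .) suffices
```

Because the update is a closed-form least-squares solve rather than gradient finetuning, it is cheap enough to apply per attention layer; the combined objective at `W_new` is never worse than at the original `W`.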

Bibliographic Details
Main Authors: Lu, Shilin, Wang, Zilan, Li, Leyang, Liu, Yanzhu, Kong, Adams Wai Kin
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Computer and Information Science; Computer vision; Generative model; Diffusion model; Text-to-Image; Concept removal; Machine unlearning
Online Access: https://hdl.handle.net/10356/180560
https://openaccess.thecvf.com/CVPR2024?day=all
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-180560 (DSpace)
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6430-6440
Citation: Lu, S., Wang, Z., Li, L., Liu, Y. & Kong, A. W. K. (2024). MACE: mass concept erasure in diffusion models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6430-6440. https://dx.doi.org/10.1109/CVPR52733.2024.00615
DOI: 10.1109/CVPR52733.2024.00615
ISBN: 979-8-3503-5300-6
ISSN: 2575-7075
Version: Submitted/Accepted version
Funding: This research is supported by the National Research Foundation (NRF), Singapore under its Strategic Capability Research Centres Funding Initiative.
Rights: © 2024 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/CVPR52733.2024.00615.
Deposited: 2024-10-15
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore