Sketch-based image synthesis with pre-trained text-to-image models
This paper presents the development of Inpainting with ControlNet - ComfyUI, a novel workflow designed to seamlessly integrate the capabilities of Stable Diffusion models with ControlNets in the ComfyUI platform. This approach enables users to generate edited images by providing a combination of an...
Saved in:

Main Author: Ng, Samuel I-Shen
Other Authors: Xingang Pan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181525
Institution: Nanyang Technological University
Similar Items
- Text2Human: text-driven controllable human image generation
  by: Jiang, Yuming, et al.
  Published: (2022)
- DEEP LEARNING APPROACHES FOR ATTRIBUTE MANIPULATION AND TEXT-TO-IMAGE SYNTHESIS
  by: KENAN EMIR AK
  Published: (2020)
- Lightweight privacy-preserving GAN framework for model training and image synthesis
  by: YANG, Yang, et al.
  Published: (2022)
- Cocktail: mixing multi-modality controls for text-conditional image generation
  by: Hu, Minghui, et al.
  Published: (2023)
- Training deep network models for accurate recognition of texts in scene images
  by: Zhang, Weilun
  Published: (2022)