Sketch-based image synthesis with pre-trained text-to-image models
This paper presents the development of Inpainting with ControlNet - ComfyUI, a novel workflow that integrates Stable Diffusion models with ControlNets in the ComfyUI platform. The workflow enables users to generate edited images by providing a combination of an image, a mask, and a sketch, producing coherent, context-aware outputs that closely match the surrounding area. By leveraging the strengths of both Stable Diffusion models and ControlNets, the method offers a more efficient, effective, and user-friendly approach to image inpainting. Integrating ControlNets with inpainting models lets users harness the power of text-to-image models while also providing additional guidance through an image input, bridging the gap between user intent and the editing process. This has far-reaching implications for image editing and manipulation. The method has shown promising results, but several areas still require investigation and improvement: integrating models that lack ControlNets, assessing the benefits and limitations of different base models, and developing strategies for accurately specifying desired colours via a coloured sketch. Further research and development could yield even more capable tools for image editing and manipulation.
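The abstract describes an inpainting setup in which a Stable Diffusion inpainting model is steered by a ControlNet conditioned on a user sketch, with a mask marking the region of the original image to repaint. As a rough, minimal sketch of that idea outside ComfyUI, the snippet below uses the Hugging Face diffusers ControlNet inpainting pipeline; the checkpoint names, file paths, prompt, and parameter values are illustrative assumptions and are not taken from the report's workflow.

```python
# Minimal sketch: Stable Diffusion inpainting guided by a scribble/sketch ControlNet.
# Checkpoints, file names, prompt, and parameters are illustrative only.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# ControlNet trained on scribble/sketch conditioning (assumed checkpoint).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)

# Inpainting-capable Stable Diffusion base model (assumed checkpoint).
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")     # original image to edit
mask = Image.open("mask.png").convert("L")         # white pixels = region to repaint
sketch = Image.open("sketch.png").convert("RGB")   # user sketch guiding the fill

result = pipe(
    prompt="a wooden chair that matches the room",   # text guidance
    image=image,
    mask_image=mask,
    control_image=sketch,                            # sketch routed through the ControlNet
    num_inference_steps=30,
    controlnet_conditioning_scale=0.8,               # strength of sketch guidance
).images[0]
result.save("edited.png")
```

In ComfyUI the same pieces would appear as nodes (checkpoint loader, ControlNet loader/apply, mask and latent nodes, sampler), so the snippet is only an analogy for how the image, mask, and sketch inputs are combined, not a reproduction of the report's graph.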
Saved in:

| Main Author: | Ng, Samuel I-Shen |
|---|---|
| Other Authors: | Xingang Pan |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | Computer and Information Science; Image generation |
| Online Access: | https://hdl.handle.net/10356/181525 |
| Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-181525
record_format | dspace
author | Ng, Samuel I-Shen
supervisor | Xingang Pan (xingang.pan@ntu.edu.sg)
school | College of Computing and Data Science
subjects | Computer and Information Science; Image generation
degree | Bachelor's degree
project code | SCSE23-1125
date deposited | 2024-12-09
format | Final Year Project (FYP)
publisher | Nanyang Technological University
citation | Ng, S. I. (2024). Sketch-based image synthesis with pre-trained text-to-image models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181525
institution | Nanyang Technological University
building | NTU Library
continent | Asia
country | Singapore
content_provider | NTU Library
collection | DR-NTU
language | English
topic | Computer and Information Science; Image generation