Exploring pre-trained diffusion models in a tuning-free manner
Diffusion models, which use a multi-step denoising sampling procedure and are trained on extensive image-text pair datasets, have emerged as an innovative class of deep generative models. These models exhibit superior performance across various applications, including image synthesis...
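For readers unfamiliar with the multi-step denoising sampling procedure mentioned in the abstract, the sketch below illustrates a generic DDPM-style ancestral sampling loop. It is not the tuning-free method proposed in this thesis; `denoise_model`, the linear beta schedule, and all parameter values are illustrative placeholders.

```python
# Minimal sketch of generic DDPM-style ancestral sampling (illustrative only,
# not the method of this thesis). `denoise_model(x, t)` is a placeholder for a
# pre-trained noise-prediction network.
import numpy as np

def ddpm_sample(denoise_model, shape, num_steps=1000,
                beta_start=1e-4, beta_end=0.02, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, num_steps)   # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)            # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        eps_hat = denoise_model(x, t)         # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            # add scaled noise for every step except the last
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean                          # final step: return the mean
    return x
```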
Main Author: | Wang, Jinghao
---|---
Other Authors: | Liu, Ziwei
Format: | Thesis-Master by Research
Language: | English
Published: | Nanyang Technological University, 2025
Subjects: |
Online Access: | https://hdl.handle.net/10356/181937
Institution: | Nanyang Technological University
Similar Items
- Exploring the use of pre-trained transformer-based models and semi-supervised learning to build training sets for text classification
  by: Te, Gian Marco I.
  Published: (2022)
- Label-free deeply subwavelength optical microscopy
  by: Pu, T., et al.
  Published: (2020)
- On training deep neural networks using a streaming approach
  by: Duda, Piotr, et al.
  Published: (2020)
- A TRAINING FRAMEWORK AND ARCHITECTURAL DESIGN OF DISTRIBUTED DEEP LEARNING
  by: WANG WEI
  Published: (2017)
- ON THE EMPIRICAL POINT-WISE PRIVACY DYNAMICS OF DEEP LEARNING MODELS
  by: LIU PHILIPPE, CHENG-JIE, MARC
  Published: (2023)