Additive quantization for truly tiny compressed diffusion models
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181210
Institution: Nanyang Technological University
Summary: Tremendous investments have been made towards the commodification of diffusion models for generation of diverse media. Their mass-market adoption is, however, still hobbled by the intense hardware resource requirements of diffusion model inference. Model quantization strategies tailored specifically towards diffusion models have seen considerable success in easing this burden, yet without exception they have explored only the Uniform Scalar Quantization (USQ) family of quantization methods. In contrast, Vector Quantization (VQ) methods, which replace groups of multiple related weights with indices into codebooks, have recently taken the parallel field of Large Language Model (LLM) quantization by storm. In this Final Year Project, we apply codebook-based additive vector quantization algorithms to the problem of diffusion model compression for the first time. We are rewarded with state-of-the-art results on the important class-conditional benchmark of LDM-4 on ImageNet at 20 inference time steps, including sFID as much as 1.93 points lower than the full-precision model at W4A8, the best-reported results for FID, sFID and ISC at W2A8, and the first-ever successful quantization to W1.5A8 (less than 1.5 bits stored per weight). Furthermore, our proposed method allows a dynamic trade-off between quantization-time GPU hours and inference-time savings, in line with the recent trend of approaches blending the best aspects of post-training quantization (PTQ) and quantization-aware training (QAT), and demonstrates FLOPs savings on arbitrary hardware via an efficient inference kernel.
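The abstract's description of vector quantization, where groups of related weights are replaced by indices into a shared codebook, can be illustrated with a minimal sketch. The snippet below is not the thesis's method; it is a hypothetical NumPy illustration of single-codebook VQ and a two-codebook additive refinement, with codebooks fitted by plain (residual) k-means and illustrative choices of group size and codebook size.

```python
# Minimal sketch of codebook-based (vector) quantization of a weight matrix,
# plus a two-codebook "additive" refinement. Generic illustration only; the
# group size, codebook sizes and fitting loop are assumptions, not the
# configuration used in the thesis.
import numpy as np

def kmeans(points, k, iters=25, seed=0):
    """Plain k-means; returns (codebook, assignments)."""
    rng = np.random.default_rng(seed)
    codebook = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest codeword.
        dists = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Move each codeword to the mean of its assigned points.
        for c in range(k):
            members = points[assign == c]
            if len(members):
                codebook[c] = members.mean(0)
    return codebook, assign

# Toy "weight matrix", split into groups of 4 consecutive weights.
W = np.random.default_rng(1).normal(size=(128, 128)).astype(np.float32)
groups = W.reshape(-1, 4)                      # each row is one weight group

# Single-codebook VQ: each group is stored as one 8-bit index into a
# 256-entry codebook, i.e. 2 bits per weight (plus the small codebook).
cb1, idx1 = kmeans(groups, k=256)
W_vq = cb1[idx1].reshape(W.shape)

# Additive VQ: a second codebook quantizes the residual, and each group is
# reconstructed as the SUM of one codeword from each codebook.
residual = groups - cb1[idx1]
cb2, idx2 = kmeans(residual, k=256)
W_aq = (cb1[idx1] + cb2[idx2]).reshape(W.shape)

print("VQ reconstruction MSE:         ", float(((W - W_vq) ** 2).mean()))
print("Additive VQ reconstruction MSE:", float(((W - W_aq) ** 2).mean()))
```

In this sketch the additive variant simply trades a higher stored bit rate (two indices per group) for lower reconstruction error; codebook count and size are the knobs that would govern the kind of accuracy-versus-footprint trade-off the abstract describes.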