Additive quantization for truly tiny compressed diffusion models

Bibliographic Details
Main Author: Hasan, Adil
Other Authors: Thomas Peyrin
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181210
Institution: Nanyang Technological University
Description
Summary: Tremendous investments have been made towards the commodification of diffusion models for the generation of diverse media. However, their mass-market adoption is still hobbled by the intense hardware resource requirements of diffusion model inference. Model quantization strategies tailored specifically towards diffusion models have seen considerable success in easing this burden, yet without exception they have explored only the Uniform Scalar Quantization (USQ) family of quantization methods. In contrast, Vector Quantization (VQ) methods, which replace groups of multiple related weights with indices into codebooks, have recently taken the parallel field of Large Language Model (LLM) quantization by storm. In this Final Year Project, we apply codebook-based additive vector quantization algorithms to the problem of diffusion model compression for the first time. We are rewarded with state-of-the-art results on the important class-conditional benchmark of LDM-4 on ImageNet at 20 inference time steps, including sFID as much as 1.93 points lower than the full-precision model at W4A8, the best-reported results for FID, sFID and ISC at W2A8, and the first-ever successful quantization to W1.5A8 (less than 1.5 bits stored per weight). Furthermore, our proposed method allows for a dynamic trade-off between quantization-time GPU hours and inference-time savings, in line with the recent trend of approaches blending the best aspects of post-training quantization (PTQ) and quantization-aware training (QAT), and demonstrates FLOPs savings on arbitrary hardware via an efficient inference kernel.
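
To make the idea of codebook-based additive vector quantization concrete, here is a minimal sketch of the dequantization step it implies: each group of d consecutive weights is stored as M small indices, one per codebook, and the reconstructed group is the sum of the selected codewords. This is an illustrative example only, not the thesis's implementation; all names and sizes (M, K, d, codebooks, codes, dequantize) are assumptions chosen for clarity.

```python
import numpy as np

d = 8        # weights per group (vector dimension)
M = 2        # number of additive codebooks
K = 256      # codewords per codebook -> log2(K) = 8 bits per index

# Codebooks are learned offline during quantization; random here purely
# for illustration.  Shape: (M, K, d).
codebooks = np.random.randn(M, K, d).astype(np.float32)

def dequantize(codes: np.ndarray) -> np.ndarray:
    """Reconstruct approximate weight groups from their codebook indices.

    codes: integer array of shape (num_groups, M); codes[g, m] selects a
           codeword from codebook m for weight group g.
    Returns: float array of shape (num_groups, d).
    """
    # Sum the chosen codeword from every codebook (the "additive" part).
    return sum(codebooks[m, codes[:, m]] for m in range(M))

# Storage cost: M * log2(K) bits per group of d weights.
bits_per_weight = M * np.log2(K) / d   # = 2 * 8 / 8 = 2 bits per weight

# Example: decode 4 weight groups.
codes = np.random.randint(0, K, size=(4, M))
approx_weights = dequantize(codes)     # shape (4, d)
```

With these illustrative settings the scheme stores 2 bits per weight (a W2-style budget); changing M, K or d trades reconstruction accuracy against storage, which is the knob that regimes such as W4A8, W2A8 and W1.5A8 in the abstract correspond to.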