4-bit Shampoo for memory-efficient network training

Second-order optimizers, which maintain a matrix termed a preconditioner, are superior to first-order optimizers in both theory and practice. The states forming the preconditioner and its inverse root restrict the maximum size of models that second-order optimizers can train. To address this, compressing 32-bit optimizer states to lower bitwidths has shown promise in reducing memory usage. However, current approaches only pertain to first-order optimizers. In this paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit Shampoo, which maintain performance similar to that of their 32-bit counterparts. We show that quantizing the eigenvector matrix of the preconditioner in 4-bit Shampoo is remarkably better than quantizing the preconditioner itself, both theoretically and experimentally. By rectifying the orthogonality of the quantized eigenvector matrix, we enhance the approximation of the preconditioner's eigenvector matrix, which also benefits the computation of its inverse 4-th root. In addition, we find that linear square quantization slightly outperforms dynamic tree quantization when quantizing second-order optimizer states. Evaluation on various networks for image classification and natural language modeling demonstrates that our 4-bit Shampoo achieves comparable performance to its 32-bit counterpart while being more memory-efficient.
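
The abstract's central idea, storing the preconditioner's eigenvector matrix as 4-bit codes, can be illustrated with a short sketch. The block size, the signed "linear square" codebook (evenly spaced points t in [-1, 1] mapped to sign(t)·t^2), and the per-block absolute-max scaling below are assumptions chosen for illustration, not details taken from the paper's code; real 4-bit storage would also pack two codes per byte, which is omitted here for clarity.

```python
import numpy as np

BLOCK = 64  # assumed block size for per-block scaling

# 16 codebook values for 4-bit storage (assumed signed linear-square mapping)
_t = np.linspace(-1.0, 1.0, 16)
CODEBOOK = np.sign(_t) * _t**2

def quantize_4bit(x):
    """Quantize a float matrix to 4-bit codes plus per-block scales."""
    flat = x.astype(np.float32).ravel()
    pad = (-len(flat)) % BLOCK
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, BLOCK)
    scales = np.abs(blocks).max(axis=1, keepdims=True) + 1e-12
    normed = blocks / scales                       # entries now in [-1, 1]
    # nearest codebook entry per element -> one 4-bit code (stored as uint8 here)
    codes = np.abs(normed[..., None] - CODEBOOK).argmin(axis=-1).astype(np.uint8)
    return codes, scales, x.shape, pad

def dequantize_4bit(codes, scales, shape, pad):
    """Reconstruct a float32 approximation from codes and per-block scales."""
    flat = (CODEBOOK[codes] * scales).ravel()
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape).astype(np.float32)

# Tiny usage example: quantize an orthogonal matrix, the kind of state the
# abstract argues should be quantized instead of the preconditioner itself.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((256, 256)))
codes, scales, shape, pad = quantize_4bit(U)
U_hat = dequantize_4bit(codes, scales, shape, pad)
print("relative reconstruction error:",
      np.linalg.norm(U - U_hat) / np.linalg.norm(U))
```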

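The abstract also mentions rectifying the orthogonality of the quantized eigenvector matrix before using it to form the preconditioner's inverse 4-th root. The sketch below uses a Björck-style iteration U ← U(3I − UᵀU)/2 as one standard way to re-orthogonalize; this iteration, the synthetic preconditioner, and the Gaussian noise standing in for quantization error are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def rectify_orthogonality(U_hat, steps=2):
    """Push a nearly orthogonal matrix back toward orthogonality via U <- U (3I - U^T U) / 2."""
    U = U_hat.astype(np.float64)
    I = np.eye(U.shape[1])
    for _ in range(steps):
        U = U @ (3.0 * I - U.T @ U) / 2.0
    return U

def inverse_fourth_root(U, eigvals, eps=1e-8):
    """Form P^{-1/4} = U diag(eigvals^{-1/4}) U^T from eigenvectors U and eigenvalues of P."""
    d = np.maximum(eigvals, eps) ** -0.25
    return (U * d) @ U.T

# Synthetic preconditioner P = G G^T, a stand-in for Shampoo's accumulated statistics.
rng = np.random.default_rng(0)
G = rng.standard_normal((128, 512))
P = G @ G.T + 1e-3 * np.eye(128)
eigvals, U = np.linalg.eigh(P)

# Pretend U went through 4-bit storage: add small noise as a stand-in for
# quantization error, then rectify and rebuild the inverse 4-th root.
U_noisy = U + 0.01 * rng.standard_normal(U.shape)
U_rect = rectify_orthogonality(U_noisy)
I = np.eye(128)
print("orthogonality error before:", np.linalg.norm(U_noisy.T @ U_noisy - I))
print("orthogonality error after: ", np.linalg.norm(U_rect.T @ U_rect - I))

ref = inverse_fourth_root(U, eigvals)
approx = inverse_fourth_root(U_rect, eigvals)
print("relative error of P^{-1/4}:", np.linalg.norm(approx - ref) / np.linalg.norm(ref))
```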

Bibliographic Details
Main Authors: WANG, Sike; ZHOU, Pan; LI, Jia; HUANG, Hua
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Optimizers; Preconditioner; Memory efficiency; OS and Networks
Collection: Research Collection School Of Computing and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access:https://ink.library.smu.edu.sg/sis_research/9731
https://ink.library.smu.edu.sg/context/sis_research/article/10731/viewcontent/4_bit.pdf
Institution: Singapore Management University