4-bit Shampoo for memory-efficient network training
Second-order optimizers, maintaining a matrix termed a preconditioner, are superior to first-order optimizers in both theory and practice. The states forming the preconditioner and its inverse root restrict the maximum size of models trained by second-order optimizers. To address this, compressing 3...
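The abstract describes compressing the states of a second-order optimizer to low bitwidths. As a rough illustration of the general idea (this is not the authors' 4-bit Shampoo implementation), the sketch below applies block-wise symmetric linear quantization to a preconditioner-like matrix, mapping each block to 4-bit integer levels with a per-block absmax scale; the function names, the block size of 64, and the int8 storage of the 4-bit codes are all assumptions made for the example.

```python
# A minimal sketch of block-wise 4-bit symmetric quantization of an
# optimizer-state matrix. Hypothetical helper names; 4-bit codes are
# stored one-per-int8 for simplicity rather than packed two-per-byte.
import torch

def quantize_4bit(x: torch.Tensor, block: int = 64):
    """Quantize x block-wise to signed 4-bit levels in [-7, 7]."""
    flat = x.reshape(-1)
    pad = (-flat.numel()) % block          # pad so the length divides the block size
    if pad:
        flat = torch.cat([flat, flat.new_zeros(pad)])
    blocks = flat.reshape(-1, block)
    # One absmax scale per block; clamp avoids division by zero.
    scale = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-12) / 7.0
    q = torch.clamp(torch.round(blocks / scale), -7, 7).to(torch.int8)
    return q, scale, x.shape, pad

def dequantize_4bit(q, scale, shape, pad):
    """Reconstruct a float approximation of the original matrix."""
    flat = (q.to(torch.float32) * scale).reshape(-1)
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

# Round-trip on a random symmetric, preconditioner-like matrix.
A = torch.randn(128, 128)
P = A @ A.T / 128
q, s, shape, pad = quantize_4bit(P)
P_hat = dequantize_4bit(q, s, shape, pad)
print((P - P_hat).norm() / P.norm())       # small relative error
```

The memory saving comes from replacing 32-bit floats with 4-bit codes plus one scale per block; how and where such quantization is applied inside a second-order optimizer is the subject of the paper itself.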
Main Authors: WANG, Sike; ZHOU, Pan; LI, Jia; HUANG, Hua
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9731
https://ink.library.smu.edu.sg/context/sis_research/article/10731/viewcontent/4_bit.pdf
Institution: Singapore Management University
Similar Items
- ScaleLong: Towards more stable training of diffusion model via scaling network long skip connection
  by: HUANG, Zhongzhan, et al.
  Published: (2023)
- Sequential recommendation with user memory networks
  by: CHEN, Xu, et al.
  Published: (2018)
- Quantization-aware interval bound propagation for training certifiably robust quantized neural networks
  by: LECHNER, Mathias, et al.
  Published: (2023)
- Efficient Data Compression with Error Bound Guarantee in Wireless Sensor Networks
  by: Mohammad Abu Alsheikh, et al.
  Published: (2014)
- Win: Weight-decay-integrated Nesterov acceleration for faster network training
  by: ZHOU, Pan, et al.
  Published: (2024)