Friendly sharpness-aware minimization

Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both training loss and loss sharpness. Despite the practical success, the mechanisms behind SAM’s generalization enhancements remain elusive, limiting its progress in deep learning optimization. In this work, we investigate SAM’s core components for generalization improvement and introduce “Friendly-SAM” (F-SAM) to further enhance SAM’s generalization. Our investigation reveals the key role of batch-specific stochastic gradient noise within the adversarial perturbation, i.e., the current minibatch gradient, which significantly influences SAM’s generalization performance. By decomposing the adversarial perturbation in SAM into full gradient and stochastic gradient noise components, we discover that relying solely on the full gradient component degrades generalization while excluding it leads to improved performance. The possible reason lies in the full gradient component’s increase in sharpness loss for the entire dataset, creating inconsistencies with the subsequent sharpness minimization step solely on the current minibatch data. Inspired by these insights, F-SAM aims to mitigate the negative effects of the full gradient component. It removes the full gradient estimated by an exponentially moving average (EMA) of historical stochastic gradients, and then leverages stochastic gradient noise for improved generalization. Moreover, we provide theoretical validation for the EMA approximation and prove the convergence of F-SAM on non-convex problems. Extensive experiments demonstrate the superior generalization performance and robustness of F-SAM over vanilla SAM. Code is available at https://github.com/nblt/F-SAM.
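
As a rough illustration of the procedure the abstract describes, below is a minimal PyTorch-style sketch of one F-SAM training step: compute the minibatch gradient, update an EMA that approximates the full gradient, subtract that EMA from the perturbation direction so only the batch-specific noise remains, ascend along that direction, and take the usual sharpness-minimization step at the perturbed point. The function name fsam_step and the values of rho, sigma, and ema_decay are illustrative assumptions rather than the authors' settings, and the exact ordering of the EMA update relative to the subtraction is likewise assumed; the authors' reference implementation is at https://github.com/nblt/F-SAM.

import torch

def fsam_step(model, loss_fn, batch, base_optimizer, ema_grad,
              rho=0.05, sigma=1.0, ema_decay=0.95):
    """One illustrative F-SAM step. ema_grad is a list of zero-initialized
    tensors matching the model's trainable parameters."""
    x, y = batch

    # 1) Stochastic gradient on the current minibatch.
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]

    # 2) EMA of historical stochastic gradients, used as an estimate
    #    of the full (whole-dataset) gradient.
    for m, g in zip(ema_grad, grads):
        m.mul_(ema_decay).add_(g, alpha=1 - ema_decay)

    # 3) Remove the estimated full-gradient component from the perturbation
    #    direction, keeping (roughly) the batch-specific gradient noise.
    noise = [g - sigma * m for g, m in zip(grads, ema_grad)]
    norm = torch.sqrt(sum((n ** 2).sum() for n in noise)) + 1e-12

    # 4) Ascend along the noise direction to the perturbed point w + eps.
    eps = [rho * n / norm for n in noise]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # 5) Sharpness-minimization step: gradient at the perturbed point,
    #    restore the original weights, then apply the base optimizer update.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
    return loss.item()

In use, base_optimizer would be any standard optimizer (e.g., SGD with momentum) over model.parameters(), and ema_grad would be initialized once as zeros and carried across steps.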


Bibliographic Details
Main Authors: LI, Tao, ZHOU, Pan, HE, Zhengbao, CHENG, Xinwen, HUANG, Xiaolin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Theory and Algorithms
Online Access: https://ink.library.smu.edu.sg/sis_research/9018
https://ink.library.smu.edu.sg/context/sis_research/article/10021/viewcontent/2024_CVPR_FSAM.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0/