Norm-based generalisation bounds for deep multi-class convolutional neural networks

We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale with the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach.
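For orientation, norm-based bounds of the kind the abstract describes typically take the following schematic shape in the prior Rademacher-complexity literature (e.g. the spectral-norm margin bound of Bartlett, Foster and Telgarsky, 2017). The notation here is illustrative rather than the paper's own: $A_1, \dots, A_L$ are the weight matrices of a depth-$L$ network $f$, $M_i$ are fixed reference matrices (e.g. the initialisation), $\gamma > 0$ is a margin, and $n$ is the sample size.

\[
\Pr\big[\arg\max_{j} f(x)_j \neq y\big]
\;\leq\;
\widehat{R}_{\gamma}(f)
\;+\;
\widetilde{O}\!\left(
\frac{\prod_{i=1}^{L} \lVert A_i \rVert_{\sigma}}{\gamma \sqrt{n}}
\left( \sum_{i=1}^{L}
\frac{\lVert A_i^{\top} - M_i^{\top} \rVert_{2,1}^{2/3}}{\lVert A_i \rVert_{\sigma}^{2/3}}
\right)^{3/2}
\right)
\]

where $\widehat{R}_{\gamma}(f)$ is the empirical margin loss, $\lVert \cdot \rVert_{\sigma}$ denotes the spectral norm, $\lVert \cdot \rVert_{2,1}$ the sum of the rows' $\ell_2$-norms, and $\widetilde{O}$ hides logarithmic factors and data-dependent constants. Bounds of this shape vanish (up to log factors) as the weights $A_i$ approach the reference matrices $M_i$, which is the near-initialisation tightness the abstract refers to; the paper's stated contributions are removing the explicit class-count dependence and letting each convolutional filter enter such a bound only once under weight sharing.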

Full description

Bibliographic Details
Main Authors: LEDENT, Antoine, MUSTAFA, Waleed, LEI, Yunwen, KLOFT, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Online Access:https://ink.library.smu.edu.sg/sis_research/7202
https://ink.library.smu.edu.sg/context/sis_research/article/8205/viewcontent/norm_based.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8205
record_format dspace
spelling sg-smu-ink.sis_research-8205 2022-08-04T08:50:02Z Norm-based generalisation bounds for deep multi-class convolutional neural networks LEDENT, Antoine MUSTAFA, Waleed LEI, Yunwen KLOFT, Marius We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale with the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach. 2021-02-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7202 https://ink.library.smu.edu.sg/context/sis_research/article/8205/viewcontent/norm_based.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University (Deep) Neural Network Learning Theory Learning Theory Artificial Intelligence and Robotics Theory and Algorithms
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic (Deep) Neural Network Learning Theory
Learning Theory
Artificial Intelligence and Robotics
Theory and Algorithms
spellingShingle (Deep) Neural Network Learning Theory
Learning Theory
Artificial Intelligence and Robotics
Theory and Algorithms
LEDENT, Antoine
MUSTAFA, Waleed
LEI, Yunwen
KLOFT, Marius
Norm-based generalisation bounds for deep multi-class convolutional neural networks
description We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale with the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach.
format text
author LEDENT, Antoine
MUSTAFA, Waleed
LEI, Yunwen
KLOFT, Marius
author_facet LEDENT, Antoine
MUSTAFA, Waleed
LEI, Yunwen
KLOFT, Marius
author_sort LEDENT, Antoine
title Norm-based generalisation bounds for deep multi-class convolutional neural networks
title_short Norm-based generalisation bounds for deep multi-class convolutional neural networks
title_full Norm-based generalisation bounds for deep multi-class convolutional neural networks
title_fullStr Norm-based generalisation bounds for deep multi-class convolutional neural networks
title_full_unstemmed Norm-based generalisation bounds for deep multi-class convolutional neural networks
title_sort norm-based generalisation bounds for deep multi-class convolutional neural networks
publisher Institutional Knowledge at Singapore Management University
publishDate 2021
url https://ink.library.smu.edu.sg/sis_research/7202
https://ink.library.smu.edu.sg/context/sis_research/article/8205/viewcontent/norm_based.pdf
_version_ 1770576269074759680