MiS Preprint Repository

We have decided to discontinue the publication of preprints on our preprint server as of 1 March 2024. The publication culture within mathematics has changed so much with the rise of repositories such as arXiv (www.arxiv.org) that we now encourage all institute members to make their preprints available there. The institute's repository in its previous form is therefore no longer necessary. The preprints published to date will remain available here, but no new preprints will be added.

MiS Preprint
64/2020

Optimization Theory for ReLU Neural Networks Trained with Normalization Layers

Yonatan Dukler, Quanquan Gu and Guido Montúfar

Abstract

The success of deep neural networks is in part due to the use of normalization layers. Normalization layers such as Batch Normalization, Layer Normalization, and Weight Normalization are ubiquitous in practice, as they improve generalization performance and significantly speed up training. Nonetheless, the vast majority of current deep learning theory and non-convex optimization literature focuses on the un-normalized setting, where the functions under consideration do not exhibit the properties of commonly normalized neural networks. In this paper, we bridge this gap by giving the first global convergence result for two-layer neural networks with ReLU activations trained with a normalization layer, namely Weight Normalization. Our analysis shows how the introduction of normalization layers changes the optimization landscape and can enable faster convergence compared with un-normalized neural networks.
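For readers unfamiliar with the setting, the following minimal numpy sketch illustrates a two-layer ReLU network with Weight Normalization, where each hidden weight vector is reparametrized as w_k = g_k * v_k / ||v_k||. The variable names, the fixed sign output layer, and the 1/sqrt(m) output scaling are illustrative conventions common in this literature, not details taken from the paper itself.

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def weight_norm_forward(x, V, g, c):
        # Weight Normalization decouples each hidden unit's direction (v_k)
        # from its magnitude (g_k): the effective weight is g_k * v_k / ||v_k||.
        norms = np.linalg.norm(V, axis=1, keepdims=True)  # (m, 1)
        W = (g[:, None] / norms) * V                      # effective weights, (m, d)
        hidden = relu(x @ W.T)                            # (n, m)
        return hidden @ c / np.sqrt(c.shape[0])           # 1/sqrt(m) output scaling

    # Tiny usage example with random data (all sizes are arbitrary).
    rng = np.random.default_rng(0)
    d, m, n = 3, 16, 5                                    # input dim, width, batch size
    x = rng.standard_normal((n, d))
    V = rng.standard_normal((m, d))                       # directions
    g = np.ones(m)                                        # magnitudes
    c = rng.choice([-1.0, 1.0], size=m)                   # fixed output layer
    print(weight_norm_forward(x, V, g, c))                # (n,) network outputs

Under this reparametrization, gradient descent updates the directions V and magnitudes g separately, which is what changes the optimization landscape relative to the un-normalized parametrization analyzed in most prior work.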

Received:
Jun 12, 2020
Published:
Jun 12, 2020
Keywords:
Overparametrized Neural Network, Neural Tangent Kernel, Weight Normalization

Related publications

Conference paper
2020 · Open Access
Yonatan Dukler, Quanquan Gu, and Guido Montúfar

Optimization theory for ReLU neural networks trained with normalization layers

In: Proceedings of the 37th International Conference on Machine Learning (ICML 2020), 13–18 July 2020. PMLR, 2020, pp. 2751–2760. (Proceedings of Machine Learning Research, vol. 119)