Talk

Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers

  • Ulrich Terstiege (Rheinisch-Westfälische Technische Hochschule Aachen)
Live Stream

Abstract

We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data. In this case, the composition of the network layers amounts to simply multiplying the weight matrices of all layers together, resulting in an overparameterized problem. The gradient flow with respect to these factors can (for suitable initializations) be re-interpreted as a Riemannian gradient flow on the manifold of rank-r matrices endowed with a suitable Riemannian metric. We show that the flow always converges to a critical point of the underlying functional. Moreover, we establish that, for almost all initializations, the flow converges to a global minimum on the manifold of rank-k matrices for some k less than or equal to r.
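The setup above can be illustrated numerically. The following is a minimal sketch (an assumed toy example, not code from the talk): a depth-3 linear network whose product W3 W2 W1 is trained to fit a hypothetical diagonal target matrix M, with the gradient flow discretized by explicit Euler steps. The "balanced" initialization used here is one example of a suitable initialization under which the product follows a Riemannian gradient flow on a fixed-rank matrix manifold.

```python
import numpy as np

# Toy problem (assumed for illustration): with whitened data the loss is
#     L(W1, W2, W3) = 0.5 * || W3 @ W2 @ W1 - M ||_F^2,
# an overparameterized matrix-fitting problem in the product of the layers.
d, depth = 4, 3
M = np.diag([1.0, 2.0, 0.5, 1.5])   # hypothetical full-rank target

# Balanced initialization, W_{j+1}^T W_{j+1} = W_j W_j^T:
# here all layers start as the same small multiple of the identity.
Ws = [0.1 * np.eye(d) for _ in range(depth)]

def product(Ws):
    P = np.eye(d)
    for W in Ws:                    # P = W3 @ W2 @ W1
        P = W @ P
    return P

def loss(Ws):
    return 0.5 * np.linalg.norm(product(Ws) - M) ** 2

eta = 1e-2                          # Euler step size for dW_j/dt = -grad_{W_j} L
for _ in range(10000):
    G = product(Ws) - M             # gradient of L w.r.t. the product matrix
    new_Ws = []
    for j in range(depth):
        left = np.eye(d)            # left factor  W_depth ... W_{j+1}
        for W in Ws[j + 1:]:
            left = W @ left
        right = np.eye(d)           # right factor W_{j-1} ... W_1
        for W in Ws[:j]:
            right = W @ right
        # chain rule: grad_{W_j} L = left^T @ G @ right^T
        new_Ws.append(Ws[j] - eta * left.T @ G @ right.T)
    Ws = new_Ws                     # all layers updated simultaneously

print(loss(Ws))                     # the product converges to the target M
```

In this toy run the discretized flow drives the product to the global minimizer M; with rank-deficient targets or other initializations the flow may instead converge on a manifold of lower rank k ≤ r, as in the result stated above.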

This is joint work with Bubacarr Bah, Holger Rauhut, and Michael Westdickenberg.


Seminar
3/7/24 – 4/4/24

Math Machine Learning seminar MPI MIS + UCLA

MPI for Mathematics in the Sciences Live Stream

Katharina Matschke

MPI for Mathematics in the Sciences
