Convergence rates for mean field stochastic gradient descent algorithms

  • Benjamin Fehrman (University of Oxford)
G3 10 (Lecture hall)

Abstract

In this talk, which is based on joint work with Benjamin Gess and Arnulf Jentzen, we will discuss convergence rates for mean field stochastic gradient descent algorithms. Such algorithms play an important role in machine learning and deep learning applications, where the goal of the algorithm is to minimize a deterministic potential, such as that arising in the optimization of an artificial neural network. We do not assume that the dynamics associated with the deterministic gradient descent of the potential converge globally to the minimum, nor do we assume that the critical points of the potential are nondegenerate. This allows for the types of degeneracies observed in practice in the optimization of certain neural networks. We will furthermore show informally that the computational efficiency of the algorithm is nearly optimal in the limit that the learning rate approaches one.
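
The abstract itself contains no code; as a rough illustration of the setting it describes, here is a minimal NumPy sketch of stochastic gradient descent with polynomially decaying step sizes applied to a potential whose minimum is degenerate (vanishing Hessian), so the critical point is not nondegenerate. The specific potential, noise model, and parameters (gamma0, alpha) are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(x):
    # Degenerate potential f(x) = |x|^4 / 4: the Hessian vanishes at the
    # minimum x = 0, so the critical point is not nondegenerate.
    return 0.25 * np.sum(x**4)

def stochastic_gradient(x, noise_scale=0.1):
    # Unbiased gradient estimate: exact gradient x^3 plus zero-mean noise,
    # standing in for the sampling error of a stochastic estimator.
    return x**3 + noise_scale * rng.standard_normal(x.shape)

def sgd(x0, n_steps=10_000, gamma0=0.5, alpha=0.75):
    # Step sizes gamma_n = gamma0 * n^(-alpha); the decay exponent
    # alpha in (0, 1] is the knob that convergence-rate analyses of
    # this type typically tune.
    x = np.array(x0, dtype=float)
    for n in range(1, n_steps + 1):
        x -= gamma0 * n**(-alpha) * stochastic_gradient(x)
    return x

x_final = sgd(x0=[1.0, -1.0])
print("final iterate:", x_final, "potential:", potential(x_final))
```

Because the minimum is degenerate, the potential is locally flat near the optimum, and the iterates approach it more slowly than in the classical strongly convex case; the sketch makes it easy to experiment with how the decay exponent affects that behavior.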

Seminar: Mathematics of Data Seminar, 4/24/18 – 3/19/21

MPI for Mathematics in the Sciences, Live Stream

Contact: Katharina Matschke, MPI for Mathematics in the Sciences