Workshop

Convergence rates for the stochastic gradient descent method for non-convex objective functions

  • Benjamin Fehrman (University of Oxford, Oxford, United Kingdom)
E1 05 (Leibniz-Saal)

Abstract

In this talk, which is based on joint work with Benjamin Gess and Arnulf Jentzen, we establish a rate of convergence to minima for the stochastic gradient descent method in the case of an objective function that is neither globally nor locally convex, nor globally attracting. We do not assume that the critical points of the objective function are nondegenerate, which allows for the type of degeneracies observed in practice in the optimization of certain neural networks. Our analysis and estimates rely on the use of mini-batches in a quantitative way in order to control the loss of iterates to non-attracting regions.
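The abstract concerns mini-batch stochastic gradient descent applied to a non-convex objective whose critical points may be degenerate or non-attracting. The sketch below illustrates the general mini-batch SGD scheme on a toy double-well objective; the objective, noise model, step size, and batch size are illustrative assumptions and are not taken from the work described in the talk.

```python
import numpy as np

# Minimal sketch of mini-batch stochastic gradient descent on a
# non-convex objective. All choices here (objective, noise, step size,
# batch size) are illustrative assumptions, not the paper's setting.

rng = np.random.default_rng(0)

def grad_sample(x, xi):
    # Stochastic gradient of f(x) = E[F(x, xi)] with
    # F(x, xi) = (x**2 - 1)**2 / 4 + xi * x, a double-well objective
    # whose critical point at x = 0 is non-attracting.
    return x * (x**2 - 1) + xi

def minibatch_sgd(x0, steps=10_000, batch_size=32, lr=1e-2, noise=0.5):
    x = x0
    for _ in range(steps):
        xi = rng.normal(0.0, noise, size=batch_size)
        # Averaging over the mini-batch reduces the variance of the
        # gradient estimate, which is the mechanism used (quantitatively,
        # in the talk's analysis) to limit escapes toward non-attracting
        # regions.
        g = np.mean(grad_sample(x, xi))
        x -= lr * g
    return x

print(minibatch_sgd(x0=0.1))  # expected to approach the minimum near x = 1
```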

Saskia Gutzschebauch

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Max Pfeffer

Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig