Abstract for the talk on 14.05.2020 (17:00 h), Math Machine Learning seminar MPI MIS + UCLA
Benjamin Fehrman (University of Oxford)
Convergence rates for the stochastic gradient descent method for non-convex objective functions
In this talk, we establish a rate of convergence to minima for the stochastic gradient descent method in the case of an objective function that is not necessarily globally or locally convex, nor globally attracting. The analysis therefore relies on the use of mini-batches in a quantitative way to control the loss of iterates to non-attracting regions. We furthermore do not assume that the critical points of the objective function are nondegenerate, which allows us to treat the types of degeneracies observed in practice in the optimization of certain neural networks.
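To illustrate the role of mini-batches described above, here is a minimal sketch (not the talk's actual analysis) of mini-batch SGD on a simple non-convex objective, the double well f(x) = (x² − 1)², with artificial Gaussian gradient noise. Averaging a batch of independent gradient estimates shrinks the noise standard deviation by a factor of √(batch size), which is the quantitative mechanism that keeps iterates from wandering into non-attracting regions. All function names and parameters are illustrative assumptions.

```python
import random

def grad(x):
    # Gradient of the double-well objective f(x) = (x**2 - 1)**2,
    # a non-convex function with minima at x = -1 and x = 1.
    return 4.0 * x * (x**2 - 1.0)

def stochastic_grad(x, noise_scale=1.0):
    # A single-sample gradient estimate: true gradient plus noise
    # (a stand-in for sampling one data point).
    return grad(x) + random.gauss(0.0, noise_scale)

def minibatch_sgd(x0, lr=0.01, batch_size=32, steps=5000, seed=0):
    # Averaging batch_size independent estimates reduces the noise
    # standard deviation by a factor of sqrt(batch_size).
    random.seed(seed)
    x = x0
    for _ in range(steps):
        g = sum(stochastic_grad(x) for _ in range(batch_size)) / batch_size
        x -= lr * g
    return x

# Starting in the basin of the minimum at x = 1; with a moderate
# batch size the iterates settle near that minimum.
x_final = minibatch_sgd(x0=0.5)
print(x_final)
```

With a small batch size (e.g. `batch_size=1`) and larger noise, the iterates fluctuate far more and can escape the basin, which is the behavior the quantitative mini-batch argument is designed to control.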