Talk
Convergence rates for the stochastic gradient descent method for non-convex objective functions
- Benjamin Fehrman (University of Oxford)
Abstract
In this talk, we establish a rate of convergence to minima for the stochastic gradient descent method in the case of an objective function that is not necessarily globally or locally convex, nor globally attracting. The analysis therefore relies on a quantitative use of mini-batches to control the loss of iterates to non-attracting regions. We furthermore do not assume that the critical points of the objective function are nondegenerate, which allows us to treat the type of degeneracies observed in practice in the optimization of certain neural networks.
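For orientation, the sketch below illustrates mini-batch stochastic gradient descent on a simple non-convex objective. It is only a minimal illustration of the method named in the abstract; the objective, step size, and batch size are assumptions chosen for the example and are not the setting analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_sample(x, xi):
    """Unbiased stochastic gradient of an illustrative non-convex objective
    f(x) = E[(x**2 - 1)**2 / 4 + xi * x], where the noise xi has mean zero.
    This objective is an assumption made for the example only."""
    return x * (x**2 - 1) + xi

def minibatch_sgd(x0, step=0.05, batch_size=32, n_steps=1000, noise_std=1.0):
    """Mini-batch SGD: average batch_size stochastic gradients per step,
    which reduces the gradient variance by a factor of batch_size."""
    x = x0
    for _ in range(n_steps):
        xi = rng.normal(0.0, noise_std, size=batch_size)
        g = np.mean(grad_sample(x, xi))   # averaged mini-batch gradient
        x = x - step * g                  # gradient descent step
    return x

print(minibatch_sgd(x0=2.0))  # iterates typically settle near a minimizer at +1 or -1
```

In this toy setting, a larger batch size plays the role the abstract assigns to mini-batches: averaging more samples reduces the gradient noise, making it less likely that the iterates escape toward the non-attracting region around the local maximum at zero.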