Talk
Optimization Algorithms for Training Over-Parameterized Models
- Mark Schmidt (University of British Columbia)
Abstract
Over-parameterized machine learning models achieve excellent performance in a variety of applications. In this talk we consider the effect of over-parameterization on stochastic optimization algorithms. We discuss how over-parameterization allows us to use a constant step size within stochastic gradient methods, and how this leads to faster convergence rates. We also present algorithms with provably-faster convergence rates in the over-parameterized setting. Finally, we discuss how over-parameterization allows us to update the learning rate during training, which leads to improved performance over a variety of previous approaches.
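
One concrete way the final idea can be realized, sketched below purely for illustration and not necessarily the method presented in the talk, is stochastic gradient descent with a per-batch Armijo backtracking line search: because an over-parameterized (interpolating) model can drive the mini-batch loss to zero, the same batch can be reused to test whether a trial step size gives sufficient decrease. The least-squares problem, function names, and hyperparameter values are assumptions made for this sketch.

    # Minimal sketch: SGD with a stochastic Armijo line search on an
    # over-parameterized least-squares problem (illustrative assumption,
    # not the specific algorithm or experiments from the talk).
    import numpy as np

    rng = np.random.default_rng(0)

    # More parameters than examples, so an interpolating (zero-loss) solution exists.
    n_samples, n_features = 50, 200
    A = rng.standard_normal((n_samples, n_features))
    b = A @ rng.standard_normal(n_features)

    def batch_loss_grad(w, idx):
        """Loss and gradient of 0.5*||A_idx w - b_idx||^2 / |idx| on a mini-batch."""
        r = A[idx] @ w - b[idx]
        return 0.5 * np.dot(r, r) / len(idx), A[idx].T @ r / len(idx)

    def sgd_armijo(max_epochs=50, batch_size=10, eta_max=10.0, c=0.1, beta=0.5):
        """SGD where each step size is chosen by backtracking until the Armijo
        sufficient-decrease condition holds on the current mini-batch."""
        w = np.zeros(n_features)
        for _ in range(max_epochs):
            for idx in np.array_split(rng.permutation(n_samples),
                                      n_samples // batch_size):
                loss, grad = batch_loss_grad(w, idx)
                eta = eta_max
                # Shrink eta until the mini-batch loss decreases enough.
                while batch_loss_grad(w - eta * grad, idx)[0] > \
                        loss - c * eta * np.dot(grad, grad):
                    eta *= beta
                w -= eta * grad
        return w

    w = sgd_armijo()
    print("final training loss:", 0.5 * np.mean((A @ w - b) ** 2))

Because the interpolating model keeps reducing the mini-batch loss, the backtracking search keeps accepting relatively large step sizes instead of forcing the decaying schedule that standard SGD analyses require.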