Abstract for the talk on 28.01.2021 (17:00 h), Math Machine Learning seminar MPI MIS + UCLA
Suriya Gunasekar (Microsoft Research, Redmond)
Rethinking the role of optimization in learning
In this talk, I will give an overview of recent results towards understanding how we learn large-capacity machine learning models. In the modern practice of machine learning, especially deep learning, many successful models have far more trainable parameters than training examples, leading to ill-posed optimization objectives. In practice, though, when such ill-posed objectives are minimized using local search algorithms like (stochastic) gradient descent ((S)GD), the "special" minimizers returned by these algorithms have remarkably good performance on new examples. We will explore the role of optimization algorithms like (S)GD in learning overparameterized models, focusing on the simpler setting of learning linear predictors.
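A minimal illustration of the phenomenon the abstract describes, in the linear-predictor setting it mentions: when a least-squares objective is underdetermined (more parameters than examples), it has infinitely many global minimizers, yet gradient descent initialized at zero converges to one "special" minimizer, namely the minimum Euclidean-norm interpolating solution. This sketch is not taken from the talk; the step size, iteration count, and random data are illustrative choices.

```python
import numpy as np

# Overparameterized linear regression: n = 10 examples, d = 50 parameters,
# so X w = y has infinitely many exact solutions (ill-posed objective).
rng = np.random.default_rng(0)
n, d = 10, 50
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on 0.5 * ||X w - y||^2, started at the origin.
# Step size chosen below 2 / L, where L is the largest eigenvalue of X^T X,
# to guarantee convergence.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)   # gradient of the squared loss

# Closed-form minimum-norm interpolant via the pseudoinverse.
w_min_norm = np.linalg.pinv(X) @ y

# Gradient descent picked out the minimum-norm solution among all minimizers.
print(np.allclose(w, w_min_norm, atol=1e-6))  # → True
```

The initialization at zero matters: gradient descent never leaves the row space of X, and the unique interpolant in that subspace is exactly the minimum-norm solution, which is the kind of implicit bias the talk examines.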
Bio: Suriya Gunasekar is a Senior Researcher in the Machine Learning Foundations group at Microsoft Research, Redmond. Prior to joining MSR, she was a Research Assistant Professor at the Toyota Technological Institute at Chicago. She received her PhD in Electrical and Computer Engineering from The University of Texas at Austin.