Talk
On the relation between over- and reparametrization of gradient descent
- Johannes Maly (LMU München)
Abstract
A core challenge in mathematical data science is to understand and leverage intrinsic structures of data sets. With the rise of deep learning, however, the focus has increasingly shifted from explicit structural regularization, as used in inverse problems and related fields, to implicit regularization in massively overparametrized machine learning models. In this talk, I will present theoretical results on the implicit bias of gradient descent in overparametrized linear regression and matrix factorization. In particular, I will discuss the role of reparametrization in the presented theory.
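
A minimal sketch of the phenomenon the abstract refers to (not taken from the talk; this is the standard textbook instance of implicit bias): for underdetermined least squares, gradient descent initialized at zero stays in the row space of the data matrix and therefore converges to the minimum-l2-norm interpolating solution. All names and sizes below (n, d, lr, the iteration count) are illustrative choices, and the step size is set conservatively from the spectral norm of X.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 20, 100                          # overparametrized: more parameters than samples
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    w = np.zeros(d)                         # zero init keeps iterates in the row space of X
    lr = 1.0 / np.linalg.norm(X, 2) ** 2    # safe step size: 1 / sigma_max(X)^2
    for _ in range(5000):
        w -= lr * X.T @ (X @ w - y)         # gradient step on the loss 0.5 * ||X w - y||^2

    w_min_norm = np.linalg.pinv(X) @ y      # minimum-norm interpolator X^+ y
    print(np.linalg.norm(X @ w - y))        # ~0: gradient descent interpolates the data
    print(np.linalg.norm(w - w_min_norm))   # ~0: and it selects the min-norm solution

Reparametrizing the model, for example writing the parameter as a product of factors as in matrix factorization, changes which solution this bias selects; that interplay is the subject of the talk.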