A Unified Approach to Controlling Implicit Regularization via Mirror Descent

  • Haoyuan Sun (MIT)
Abstract

Inspired by the remarkable success of deep neural networks, there has been significant interest in understanding the generalization performance of overparameterized models. Substantial efforts have been invested in characterizing how optimization algorithms impact generalization through their “preferred” solutions, a phenomenon commonly referred to as implicit regularization. For instance, it has been argued that gradient descent (GD) induces an implicit $\ell_2$-norm regularization in regression and classification problems. However, in prior literature, the implicit regularization of different algorithms is confined to either a specific geometry or a particular class of learning problems. To address this gap, we present a unified approach using mirror descent (MD), a notable generalization of GD, to control implicit regularization in both regression and classification settings. More specifically, we show that MD with the general class of homogeneous potential functions converges in direction to a generalized maximum-margin solution for linear classification problems, thereby answering a long-standing question in the classification setting. Furthermore, MD can be implemented efficiently, and we demonstrate through experiments that it is a versatile method for producing learned models with different regularizers and different generalization performance.
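
To make the setting concrete, here is a minimal NumPy sketch (not the authors' code) of mirror descent with a homogeneous $p$-norm potential $\psi(w) = \tfrac{1}{p}\|w\|_p^p$ on an overparameterized linear regression problem. The function names and the choices of $p$, learning rate, and step count are illustrative assumptions; $p = 2$ recovers plain GD, while other values of $p$ bias the interpolating solution toward a different norm.

```python
import numpy as np

def mirror_map(w, p):
    # gradient of the homogeneous potential psi(w) = (1/p) * ||w||_p^p
    return np.sign(w) * np.abs(w) ** (p - 1)

def inverse_mirror_map(z, p):
    # inverse of the mirror map: takes a dual point back to primal space
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1))

def mirror_descent(X, y, p=3.0, lr=1e-2, steps=20000):
    # MD on the squared loss (1/2n)||Xw - y||^2; the gradient step is
    # taken on the dual iterate z = mirror_map(w), not on w itself
    z = np.zeros(X.shape[1])
    for _ in range(steps):
        w = inverse_mirror_map(z, p)
        grad = X.T @ (X @ w - y) / len(y)
        z -= lr * grad
    return inverse_mirror_map(z, p)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))   # overparameterized: d > n
y = rng.standard_normal(20)
w = mirror_descent(X, y, p=3.0)
print(np.linalg.norm(X @ w - y))     # residual shrinks as MD fits the data
```

Changing the potential (here, the exponent $p$) is the single knob that controls which interpolating or max-margin solution the iterates select, which is the sense in which MD gives unified control over implicit regularization.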

Seminar: Math Machine Learning seminar MPI MIS + UCLA
Date: 05.12.24
Venue: MPI for Mathematics in the Sciences (Live Stream)
Contact: Katharina Matschke, MPI for Mathematics in the Sciences