Talk

Implicit bias in machine learning

  • Ohad Shamir (Weizmann Institute of Science)

Abstract

Most practical algorithms for supervised machine learning boil down to optimizing the average performance over a training dataset. However, it is increasingly recognized that even when the optimization objective is the same, the manner in which it is optimized plays a decisive role in the properties of the resulting predictor. For example, when training large neural networks, there are generally many weight combinations that perfectly fit the training data. However, gradient-based training methods somehow tend to reach those which, on the one hand, do not overfit, yet on the other hand, are highly vulnerable to adversarially crafted examples. Why the dynamics of these methods lead to such "implicit biases" is still far from fully understood. In this talk, I'll describe several recent theoretical results related to this question, in the context of benign overfitting and adversarial examples.

Based on joint work with Gal Vardi and Gilad Yehudai.
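Below is a minimal sketch in Python/NumPy (illustrative only, not taken from the talk; the problem sizes, seed, and step count are arbitrary assumptions) of a classical instance of implicit bias: in overparameterized linear regression, plain gradient descent started at zero selects, among the many weight vectors that perfectly fit the data, the one of minimum Euclidean norm.

    # Implicit bias of gradient descent in overparameterized least squares:
    # with d > n there are infinitely many interpolating solutions, yet GD
    # initialized at zero converges to the minimum-L2-norm interpolant.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 20, 100                        # n samples, d parameters (d > n)
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # safe step size: 1/L, L = ||X||_2^2
    w = np.zeros(d)                       # initialization at the origin
    for _ in range(5000):                 # plain GD on 0.5 * ||Xw - y||^2
        w -= lr * X.T @ (X @ w - y)

    w_min_norm = np.linalg.pinv(X) @ y    # min-norm solution via pseudoinverse
    print("training residual:", np.linalg.norm(X @ w - y))            # ~0
    print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0

Initialization at zero is what pins down the bias here: the gradient descent iterates never leave the row space of X, and the unique interpolant in that space is the minimum-norm one. The objective alone does not distinguish among interpolants; the optimization dynamics do.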

Math Machine Learning seminar MPI MIS + UCLA

MPI for Mathematics in the Sciences, Live Stream

Katharina Matschke (MPI for Mathematics in the Sciences), contact via email

Upcoming Events of this Seminar

05.12.24
19.12.24