Abstract for the talk on 11.03.2021 (17:00 h)

Math Machine Learning seminar MPI MIS + UCLA

Spencer Frei (Department of Statistics, UCLA)
Generalization of SGD-trained neural networks of any width in the presence of adversarial label noise
11.03.2021, 17:00 h, only video broadcast

Can overparameterized neural networks trained by SGD provably generalize when the labels are corrupted with substantial random noise? We answer this question in the affirmative by showing that, for a broad class of distributions, one-hidden-layer networks trained by SGD generalize when the distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. Equivalently, such networks have classification accuracy competitive with that of the best halfspace over the distribution. Our results hold for networks of arbitrary width and for arbitrary initializations of SGD. In particular, we do not rely upon the approximations to infinite-width networks that are typically used in theoretical analyses of SGD-trained neural networks.
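
To make the setting concrete, here is a minimal NumPy sketch of the kind of experiment the abstract describes: a one-hidden-layer network of arbitrary width, trained by online SGD on linearly separable data whose training labels are partially corrupted, evaluated against the clean labels. This is an illustration, not the paper's construction; the width, leaky-ReLU slope, learning rate, and random-flip noise model (the paper allows worst-case flips) are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: clean labels given by the halfspace sign(<w*, x>).
d, n_train, n_test = 10, 2000, 2000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n_train + n_test, d))
y_clean = np.sign(X @ w_star)

# Corrupt a fraction of the training labels; random flips stand in here for
# the adversarial (worst-case) label noise considered in the talk.
noise_rate = 0.1
y = y_clean.copy()
flip = rng.random(n_train) < noise_rate
y[:n_train][flip] *= -1

Xtr, ytr = X[:n_train], y[:n_train]
Xte, yte = X[n_train:], y_clean[n_train:]

# One-hidden-layer leaky-ReLU network of width m with fixed second layer,
# started from a random (i.e. arbitrary) initialization.
m, alpha, lr = 100, 0.1, 0.05
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(W, x):
    z = W @ x
    return a @ np.where(z > 0, z, alpha * z)

# Online SGD on the logistic loss, one fresh sample per step.
for t in range(n_train):
    x, yt = Xtr[t], ytr[t]
    z = W @ x
    margin = yt * (a @ np.where(z > 0, z, alpha * z))
    g = -yt / (1.0 + np.exp(margin))        # d(logistic loss)/d(margin) * yt
    dz = np.where(z > 0, 1.0, alpha)        # leaky-ReLU derivative
    W -= lr * g * np.outer(a * dz, x)

# Accuracy against the *clean* labels, i.e. against the best halfspace.
preds = np.sign([forward(W, x) for x in Xte])
print("test accuracy vs. clean labels:", (preds == yte).mean())

With the toy parameters above, the trained network's test accuracy against the clean labels should be close to that of the generating halfspace even though a tenth of its training labels were flipped, which is the phenomenon the result quantifies.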

If you want to participate in this video broadcast, please register using this special form. The (Zoom) link for the video broadcast will be sent to your email address one day before the seminar.
