Transition to Linearity of Wide Neural Networks

  • Chaoyue Liu (Ohio State University)

Abstract

In this talk, I will discuss a remarkable phenomenon of wide neural networks: the transition to linearity. The phenomenon can be described as follows: as the network width increases to infinity, the neural network becomes a linear model with respect to its parameters in any O(1)-neighborhood of the random initialization. This phenomenon underlies the well-known constancy of the neural tangent kernel (NTK) of certain infinitely wide neural networks. Aiming to give a more intuitive understanding, I will describe, from a geometric point of view, how this somewhat unexpected phenomenon, as well as the constancy of the NTK, arises as a natural consequence of assembling a large number of submodels.
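As a rough numerical illustration of the phenomenon (a sketch, not material from the talk): for a one-hidden-layer network f(x; W, v) = v·tanh(Wx)/√m with standard Gaussian initialization, the parameter gradient ∇f, whose Gram matrix is the neural tangent kernel, should change less and less over a fixed O(1) perturbation of the parameters as the width m grows. The NumPy sketch below checks this directly; the 1/√m output scaling and the tanh activation are assumptions chosen to match the usual NTK setting, not details taken from the abstract.

import numpy as np

rng = np.random.default_rng(0)
d = 10  # input dimension

def grad_f(W, v, x, m):
    """Gradient of f(x; W, v) = v @ tanh(W @ x) / sqrt(m) with respect
    to all parameters (W, v), flattened into one vector. This is the
    tangent-kernel feature map at the point (W, v) in parameter space."""
    pre = W @ x
    act = np.tanh(pre)
    d_v = act / np.sqrt(m)                               # df/dv
    d_W = np.outer(v * (1.0 - act**2), x) / np.sqrt(m)   # df/dW
    return np.concatenate([d_W.ravel(), d_v])

x = rng.standard_normal(d)
x /= np.linalg.norm(x)

for m in [10, 100, 1000, 10000, 100000]:
    W = rng.standard_normal((m, d))   # random initialization
    v = rng.standard_normal(m)
    g0 = grad_f(W, v, x, m)

    # A unit-norm direction in parameter space: an O(1) perturbation.
    dW = rng.standard_normal((m, d))
    dv = rng.standard_normal(m)
    scale = np.sqrt((dW**2).sum() + (dv**2).sum())
    g1 = grad_f(W + dW / scale, v + dv / scale, x, m)

    # If f were exactly linear in its parameters, g1 would equal g0.
    rel = np.linalg.norm(g1 - g0) / np.linalg.norm(g0)
    print(f"width m = {m:>6}: relative change of gradient = {rel:.5f}")

On a typical run the printed relative change shrinks roughly like m^(-1/2) as the width grows, consistent with the Hessian of such networks having spectral norm O(1/√m) under this parametrization: over an O(1) neighborhood the tangent kernel is nearly constant, i.e. the model is nearly linear.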

Math Machine Learning seminar MPI MIS + UCLA

MPI for Mathematics in the Sciences (live stream)

Contact: Katharina Matschke, MPI for Mathematics in the Sciences
