Abstract for the talk on 07.04.2022 (17:00 h), Math Machine Learning seminar MPI MIS + UCLA
Chaoyue Liu (Ohio State University)
Transition to Linearity of Wide Neural Networks
See the video of this talk.
In this talk, I will discuss a remarkable phenomenon of wide neural networks: transition to linearity. The phenomenon can be described as follows: as the network width increases to infinity, the neural network becomes a linear model with respect to its parameters, in any O(1)-neighborhood of the random initialization. This phenomenon underlies the widely known constancy of the neural tangent kernel (NTK) of certain infinitely wide neural networks. Aiming to give a more intuitive understanding, I will describe, from a geometric point of view, how this somewhat unexpected phenomenon, as well as the constancy of the NTK, arises as a natural consequence of assembling a large number of submodels.
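The phenomenon described above can be probed numerically. The following sketch (my own illustration, not code from the talk) measures how much the parameter gradient (the tangent map) of a simple two-layer ReLU network with NTK-style 1/sqrt(m) scaling changes after an O(1)-norm parameter perturbation; transition to linearity predicts this relative change shrinks as the width m grows.

```python
import numpy as np

# Hedged illustration: a two-layer network
#   f(x; W, v) = v^T relu(W x) / sqrt(m)
# with width m. If f becomes linear in its parameters in an
# O(1)-neighborhood of initialization, the gradient of f with
# respect to the parameters should barely change after an O(1)
# parameter perturbation, and the relative change should shrink
# as m grows.

rng = np.random.default_rng(0)

def tangent(W, v, x):
    """Gradient of f(x; W, v) with respect to all parameters, flattened."""
    m = v.shape[0]
    pre = W @ x                                      # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)                       # ReLU activations
    dW = np.outer(v * (pre > 0), x) / np.sqrt(m)     # df/dW, shape (m, d)
    dv = act / np.sqrt(m)                            # df/dv, shape (m,)
    return np.concatenate([dW.ravel(), dv])

def relative_tangent_change(m, d=5, step=1.0):
    """Relative change of the tangent after a perturbation of norm `step`."""
    W = rng.normal(size=(m, d))
    v = rng.normal(size=m)
    x = rng.normal(size=d)
    g0 = tangent(W, v, x)
    # Draw a random parameter perturbation of fixed O(1) norm.
    n_params = m * d + m
    delta = rng.normal(size=n_params)
    delta *= step / np.linalg.norm(delta)
    W1 = W + delta[: m * d].reshape(m, d)
    v1 = v + delta[m * d :]
    g1 = tangent(W1, v1, x)
    return np.linalg.norm(g1 - g0) / np.linalg.norm(g0)

results = {}
for m in (10, 100, 1000, 10000):
    results[m] = relative_tangent_change(m)
    print(f"width {m:>5}: relative tangent change {results[m]:.4f}")
```

Under this scaling, the relative change of the tangent decays with width, consistent with the tangent kernel staying nearly constant along O(1) parameter paths for very wide networks.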