Abstract for the talk on 06.04.2023 (5:00 pm)

Math Machine Learning seminar MPI MIS + UCLA

Jaehoon Lee (Google Brain)
Exploring Infinite-Width Limit of Deep Neural Networks
See also the video of this talk.

In this talk, I will discuss our research on understanding the infinite-width limit of neural networks. In this limit, neural networks correspond to Neural Network Gaussian Processes (NNGPs) and Neural Tangent Kernels (NTKs). I will first describe our empirical study exploring the relationship between wide neural networks and neural kernel methods. Our study resolves a variety of open questions related to infinitely wide neural networks and opens up interesting new ones. In the second half of the talk, I will discuss our recent work on scaling up infinite-width neural kernel methods to millions of data points. There are unique challenges in scaling up neural kernel methods, and I will talk about our attempts to overcome them. If there is time, I will discuss some applications of the infinite-width limit of neural networks, such as dataset distillation, neural architecture search, uncertainty quantification, and neural scaling laws.
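The NNGP correspondence mentioned in the abstract can be illustrated numerically: for a one-hidden-layer ReLU network with i.i.d. Gaussian weights, the covariance of the output converges, as the width grows, to the degree-1 arc-cosine kernel of Cho & Saul. A minimal NumPy sketch (not part of the talk; the width and readout-variance scaling below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def arccos_kernel(x1, x2):
    """Analytic NNGP kernel of a one-hidden-layer ReLU network:
    the degree-1 arc-cosine kernel (Cho & Saul, 2009)."""
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    cos_t = np.clip(x1 @ x2 / (n1 * n2), -1.0, 1.0)
    t = np.arccos(cos_t)
    return n1 * n2 * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def empirical_kernel(x1, x2, width=100_000):
    """Monte Carlo estimate: output covariance of a random ReLU network
    with N(0, 1) first-layer weights and readout variance 1/width."""
    d = x1.shape[0]
    W = rng.standard_normal((width, d))   # first-layer weights ~ N(0, 1)
    h1 = np.maximum(W @ x1, 0.0)          # hidden ReLU features for x1
    h2 = np.maximum(W @ x2, 0.0)          # hidden ReLU features for x2
    return h1 @ h2 / width                # averages E[relu(w.x1) relu(w.x2)]

x1 = np.array([1.0, 0.5, -0.3])
x2 = np.array([0.2, -1.0, 0.7])

# As width -> infinity, the empirical estimate approaches the analytic kernel.
print(arccos_kernel(x1, x2), empirical_kernel(x1, x2))
```

At width 10^5 the Monte Carlo estimate typically matches the closed-form kernel to within a percent or two, which is the empirical fingerprint of the Gaussian-process limit the talk builds on.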
