Workshop

The Mathematics of Deep Learning: Can we Open the Black Box of Deep Neural Networks?

  • Gitta Kutyniok (TU Berlin)
MPI für Mathematik in den Naturwissenschaften Leipzig (Live Stream)

Abstract

Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven, and a comprehensive mathematical foundation is still missing. Regarding deep learning as a statistical learning problem, the necessary theory can be divided into the research directions of expressivity, learning, and generalization. Recently, the new direction of interpretability has become important as well.
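As standard background for this splitting (terminology from statistical learning theory, not specific to this talk), the excess risk of a trained network $\hat f$ over the best possible predictor $f^*$ decomposes into three terms matching the three directions. Here $\mathcal{F}$ denotes the hypothesis class of networks with a fixed architecture, $f_{\mathcal{F}}$ the best network in $\mathcal{F}$, and $\hat f_{\mathcal{F}}$ the empirical risk minimizer:

\[
\mathcal{R}(\hat f) - \mathcal{R}(f^*)
= \underbrace{\mathcal{R}(\hat f) - \mathcal{R}(\hat f_{\mathcal{F}})}_{\text{optimization (learning)}}
+ \underbrace{\mathcal{R}(\hat f_{\mathcal{F}}) - \mathcal{R}(f_{\mathcal{F}})}_{\text{estimation (generalization)}}
+ \underbrace{\mathcal{R}(f_{\mathcal{F}}) - \mathcal{R}(f^*)}_{\text{approximation (expressivity)}}.
\]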

In this talk, we will give an introduction to these four research foci. We will then delve deeper into the area of expressivity, namely the approximation capacity of neural network architectures, which is one of the most developed mathematical theories in this field, and discuss some recent work. Finally, we will provide a survey of the novel and highly relevant area of interpretability, which aims at understanding how a given network reaches its decisions, and discuss the very first mathematically founded approach to this problem.
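To illustrate what is meant by approximation capacity (a classical result quoted here as background, not as part of the talk's new material), the universal approximation theorem states that shallow networks with a continuous, non-polynomial activation $\varrho$ are dense in $C(K)$ for every compact $K \subset \mathbb{R}^d$:

\[
\forall\, f \in C(K),\ \varepsilon > 0 \ \exists\, N \in \mathbb{N},\ a_k, b_k \in \mathbb{R},\ w_k \in \mathbb{R}^d:
\quad \sup_{x \in K} \Big| f(x) - \sum_{k=1}^{N} a_k\, \varrho\big(\langle w_k, x\rangle + b_k\big) \Big| < \varepsilon.
\]

Research on expressivity refines such qualitative density statements into quantitative rates, i.e., how the network size $N$ must grow with the desired accuracy $\varepsilon$ and the regularity of $f$.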

Conference
September 10–11, 2020

GAMM AG Workshop Computational and Mathematical Methods in Data Science

MPI für Mathematik in den Naturwissenschaften Leipzig (Live Stream)

  • Valeria Hünniger (Max Planck Institute for Mathematics in the Sciences)
  • Max von Renesse (Leipzig University)
  • André Uschmajew (Max Planck Institute for Mathematics in the Sciences)