Towards Lower Bounds on the Depth of ReLU Neural Networks

  • Christoph Hertrich (TU Berlin)

Abstract

We contribute to a better understanding of the class of functions that is exactly representable by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems, which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). We also present upper bounds on the sizes of neural networks required for exact function representation. This is joint work with Amitabh Basu, Marco Di Summa, and Martin Skutella.
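The abstract itself contains no code, but a minimal sketch (not part of the talk) can make "exact representation" concrete. The NumPy snippet below builds a one-hidden-layer ReLU network that computes max(a, b) exactly, not approximately, using the standard identities max(a, b) = ReLU(a - b) + b and b = ReLU(b) - ReLU(-b); the function name `max_net` and the specific weights are illustrative choices, not from the talk.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def max_net(a, b):
    """One-hidden-layer ReLU network computing max(a, b) exactly,
    via max(a, b) = relu(a - b) + relu(b) - relu(-b)."""
    # Hidden layer: three ReLU neurons with fixed affine pre-activations.
    hidden = relu(np.array([a - b, b, -b]))
    # Linear output layer combining the hidden units.
    return float(hidden @ np.array([1.0, 1.0, -1.0]))

assert max_net(3.0, 5.0) == 5.0
assert max_net(-2.0, -7.0) == -2.0
```

The depth question the talk addresses is of this kind: every function computed by a ReLU network is continuous and piecewise linear, and the lower-bound question is whether some such functions provably require more hidden layers than others to be represented exactly.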


Math Machine Learning seminar MPI MIS + UCLA

MPI for Mathematics in the Sciences, Live Stream

Contact: Katharina Matschke, MPI for Mathematics in the Sciences
