Talk
Understanding Neural Network Expressivity via Polyhedral Geometry
- Christoph Hertrich (UTN Nuremberg)
Abstract
Neural networks with rectified linear unit (ReLU) activations are one of the standard models in modern machine learning. Despite their practical importance, fundamental theoretical questions concerning ReLU networks remain open to this day. For instance, what is the precise set of (piecewise linear) functions representable by ReLU networks of a given depth? And what functions can we represent with polynomial-size neural networks? In this talk I will report on recent progress toward resolving such questions using techniques from polyhedral geometry and combinatorial optimization. This talk is based on arxiv.org/abs/2505.14338, arxiv.org/abs/2411.03006, and related papers and preprints.
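As a minimal illustration of the expressivity question (not taken from the talk itself): every ReLU network computes a continuous piecewise linear function, and conversely even a simple piecewise linear function like max(x, y) can be written exactly as a small ReLU network, here via the identity max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y). The function names below are illustrative, not from the referenced papers.

```python
def relu(z):
    # Rectified linear unit: max(z, 0)
    return max(z, 0.0)

def relu_net_max(x, y):
    # A one-hidden-layer ReLU network with three hidden units that
    # exactly represents the piecewise linear function max(x, y):
    #   max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y)
    h1 = relu(x - y)   # contributes x - y when x > y, else 0
    h2 = relu(y)       # h2 - h3 reconstructs the identity map on y
    h3 = relu(-y)
    return h1 + h2 - h3

print(relu_net_max(3.0, 5.0))    # 5.0
print(relu_net_max(-2.0, -7.0))  # -2.0
```

How much depth is needed to represent the maximum of many numbers in this way is exactly the kind of question the talk addresses.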