Talk
Maxout polytopes
- Shelby Cox (MPI MiS, Leipzig)
Abstract
Maxout polytopes are the polytopes defined by feedforward neural networks with 2-maxout activation and non-negative weights after the first layer. Fixing the number of nodes in each layer and varying the network weights yields a family of maxout polytopes. I will discuss the parameter spaces and extremal f-vectors of these families, and I will show that when the network has no bottlenecks, the generic maxout polytopes are cubical. A key construction is the separating hypersurface of two normally equivalent polytopes, which arises when a layer is added to the network.
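To fix notation (a sketch of the standard definition of a maxout unit, not taken verbatim from the abstract): a 2-maxout unit computes the maximum of two affine functions of its input. For a layer with input $x \in \mathbb{R}^n$, the $i$-th output coordinate is
\[
  y_i \;=\; \max\bigl\{\, a_i^\top x + b_i,\; c_i^\top x + d_i \,\bigr\},
\]
where $a_i, c_i \in \mathbb{R}^n$ and $b_i, d_i \in \mathbb{R}$ are the network weights. In the networks considered here, the weight vectors in every layer after the first are required to be entrywise non-negative, so each output of the network is a convex piecewise-linear function of the input, and such a function determines a polytope.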
This talk is based on the preprint "Maxout polytopes", which is joint work with Andrei Balakin, Georg Loho, and Bernd Sturmfels.