Abstract for the talk on 09.11.2022 (13:00)

Seminar on Nonlinear Algebra

Hanna Tseran (MPI MiS, Leipzig)
Expected Complexity and Gradients of Maxout Networks

Learning with neural networks relies not only on the complexity of the representable functions but, more importantly, on how typical parameters are assigned to functions of different complexity. Taking the number of activation regions as a complexity measure, we show that the practical complexity of networks with maxout activation functions, which correspond to tropical rational maps, is often far below the theoretical maximum. Continuing the analysis of the expected behavior, we study the expected gradients of a maxout network with respect to inputs and parameters and obtain bounds on their moments depending on the architecture and the parameter distribution. Based on this, we formulate parameter initialization strategies that avoid vanishing and exploding gradients in wide networks.
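To make the objects in the abstract concrete, the following is a minimal NumPy sketch of a maxout layer (each unit takes the maximum of several affine pre-activations) together with a variance-scaled Gaussian initialization. The function names and the constant `c` are illustrative assumptions, not the talk's exact recipe: `c = 2` recovers He-style scaling, while the talk's analysis derives a constant depending on the maxout rank.

```python
import numpy as np

def maxout_layer(x, W, b):
    # W has shape (rank, fan_out, fan_in), b has shape (rank, fan_out).
    # pre[k] = W[k] @ x + b[k]; the output is the entrywise max over the rank axis.
    pre = np.einsum("koi,i->ko", W, x) + b
    return pre.max(axis=0)

def init_weights(fan_in, fan_out, rank, c, rng):
    # Sample i.i.d. Gaussian weights with variance c / fan_in and zero biases.
    # The choice of c controls how activation/gradient norms scale with depth;
    # the rank-dependent value from the talk is not reproduced here.
    W = rng.normal(0.0, np.sqrt(c / fan_in), size=(rank, fan_out, fan_in))
    b = np.zeros((rank, fan_out))
    return W, b

def norm_profile(depth, width, rank, c, seed=0):
    # Push one random input through a deep maxout network and record the
    # mean squared activation per layer, to see whether it stays stable.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=width)
    norms = [float(x @ x) / width]
    for _ in range(depth):
        W, b = init_weights(width, width, rank, c, rng)
        x = maxout_layer(x, W, b)
        norms.append(float(x @ x) / width)
    return norms
```

Plotting `norm_profile` for several values of `c` shows the vanishing/exploding behavior the initialization strategy is designed to avoid: too small a `c` shrinks the activations layer by layer, too large a `c` blows them up.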
