Workshop

Universal approximation bounds for deep stochastic feedforward networks

  • Thomas Merkh (MPI MiS)
E1 05 (Leibniz-Saal)

Abstract

This work focuses on the representational power of deep stochastic feedforward neural networks. Understanding how representational capacity differs between deep and shallow architectures has long been of great interest in the graphical models and machine learning communities. Here, bounds are determined on the width and depth that a deep network requires in order to approximate any stochastic mapping from the set of inputs to the set of outputs arbitrarily well; such a network can be regarded as a universal approximator. These bounds were obtained by analyzing the network in terms of compositions of linear transformations of probability distributions expressible by its layers. This analysis reveals a spectrum of networks capable of universal approximation in which, as the layer width decreases, the network depth must increase proportionally. Furthermore, an explicit construction is given of a network with the minimum number of trainable parameters necessary for universal approximation.
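The layer-compositional viewpoint mentioned above can be illustrated with a minimal sketch (not taken from the talk itself): for networks with finitely many discrete states, each layer acts on probability distributions as a row-stochastic matrix, and composing layers corresponds to multiplying these matrices. All matrix sizes and values below are hypothetical, chosen only to show that the composed map is again a stochastic mapping from inputs to outputs.

```python
import numpy as np

# Illustrative sketch: a stochastic feedforward network over discrete
# states can be viewed layer by layer as a row-stochastic matrix K_l,
# where K_l[x, y] = P(layer output = y | layer input = x).
# The overall input-output mapping is the product K_1 K_2 ... K_L,
# i.e., a composition of linear transformations of distributions.

rng = np.random.default_rng(0)

def random_stochastic_matrix(n_in, n_out):
    """Row-stochastic matrix: each row is a conditional distribution."""
    m = rng.random((n_in, n_out))
    return m / m.sum(axis=1, keepdims=True)

# Hypothetical architecture: 2 input states, two hidden layers of
# width 4, and 2 output states.
layers = [random_stochastic_matrix(2, 4),
          random_stochastic_matrix(4, 4),
          random_stochastic_matrix(4, 2)]

# Composing the layer maps yields the network's overall stochastic map.
overall = layers[0]
for k in layers[1:]:
    overall = overall @ k

# The composition is again row-stochastic: each input state induces a
# probability distribution over output states.
print(np.allclose(overall.sum(axis=1), 1.0))
```

Because a product of row-stochastic matrices is row-stochastic, the network as a whole always defines a valid stochastic mapping; the universal approximation question is which such mappings are reachable at a given width and depth.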

Saskia Gutzschebauch

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Max Pfeffer

Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig