Preprint 73/2014

On the Number of Linear Regions of Deep Neural Networks

Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio

Submission date: 29 July 2014
Pages: 19
Published in: NIPS'14: Proceedings of the 27th International Conference on Neural Information Processing Systems, Volume 2, Montreal, Quebec, Canada, December 8-13, 2014. Cambridge, MA: MIT Press, 2014, pp. 2924-2932.
MSC-Numbers: 82C32, 68R99
Keywords and phrases: Deep learning, neural network, input space partition, rectifier, maxout
Download full preprint: PDF (4709 kB)

Abstract:
We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations, in terms of their symmetries and the number of linear regions they have. Deep networks are able to sequentially map portions of each layer's input space to the same output. In this way, deep models compute functions that react equally to complicated patterns from different inputs. The compositional structure of these functions enables them to reuse pieces of computation exponentially often in terms of the network's depth. This paper investigates the complexity of such compositional maps and contributes new theoretical results on the advantage of depth for neural networks with piecewise linear activation functions. In particular, our analysis is not specific to a single family of models; as an example, we apply it to rectifier and maxout networks. We improve on complexity bounds from pre-existing work and investigate the behavior of units in higher layers.
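To make the abstract's central claim concrete, the following minimal Python sketch counts the linear regions of a tiny scalar-input ReLU network by tracking where its ReLU on/off pattern changes along a fine input grid. The width-2 "folding" layers, their weights, and the grid-based counting procedure are illustrative assumptions in the spirit of the paper's tent-map intuition, not the authors' construction or code; with these assumed weights each hidden layer folds the unit interval once, so the region count grows exponentially with depth.

import numpy as np

# Illustrative sketch (not the authors' code): count the linear pieces of a
# scalar-input ReLU network by tracking where its ReLU on/off pattern changes
# along a fine grid. The width-2 "folding" layers below are an assumed toy
# construction: each layer applies a tent map to its input, so the number of
# linear regions grows exponentially with the number of layers.

def relu(x):
    return np.maximum(x, 0.0)

def activation_pattern(x, weights, biases):
    """Return the concatenated ReLU on/off pattern for inputs x of shape (N, 1)."""
    h = x
    pattern = []
    for W, b in zip(weights, biases):
        pre = h @ W.T + b
        pattern.append(pre > 0)
        h = relu(pre)
    return np.concatenate(pattern, axis=1)

L = 4  # number of hidden layers, each of width 2
weights = [np.array([[1.0], [1.0]])] + [np.array([[2.0, -4.0], [2.0, -4.0]])] * (L - 1)
biases = [np.array([0.0, -0.5])] * L

xs = np.linspace(0.0, 1.0, 100001).reshape(-1, 1)
patterns = activation_pattern(xs, weights, biases)
# Each change of activation pattern along the 1-D input marks the boundary
# between two regions on which the network computes a different linear function.
changes = np.any(patterns[1:] != patterns[:-1], axis=1).sum()
print("linear regions visited on [0, 1]:", changes + 1)  # expect 2**L = 16 here

Doubling L in this sketch roughly doubles the region count again, whereas a single hidden layer of the same total width would only add regions linearly in the number of units; this is the depth advantage the paper quantifies with explicit bounds for rectifier and maxout networks.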
