Abstract for the talk on 01.10.2020 (17:00 h)

Math Machine Learning seminar MPI MIS + UCLA

David Rolnick (McGill University & Mila)
Expressivity and learnability: linear regions in deep ReLU networks
See the video of this talk.

In this talk, we show that there is a large gap between the maximum complexity of the functions that a neural network can express and the expected complexity of the functions that it learns in practice. Deep ReLU networks are piecewise linear functions, and the number of distinct linear regions is a natural measure of their expressivity. It is well known that the maximum number of linear regions grows exponentially with the depth of the network, and this has often been used to explain the success of deeper networks. We show that the expected number of linear regions in fact grows polynomially in the size of the network, far below the exponential upper bound and independent of depth. This statement holds true both at initialization and after training, under natural assumptions for gradient-based learning algorithms. We also show that the linear regions of a ReLU network reveal information about the network's parameters. In particular, it is possible to reverse-engineer the weights and architecture of an unknown deep ReLU network merely by querying it.
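The notion of counting linear regions can be made concrete empirically. The sketch below (not the method from the talk; an illustrative assumption-laden toy) builds a small randomly initialized ReLU network with NumPy and counts the distinct activation patterns encountered along a one-dimensional line through input space. Since each activation pattern (the on/off state of every ReLU) fixes one affine piece of the network, the number of distinct patterns along the line lower-bounds the number of linear regions the line crosses. All function names and the architecture are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(widths):
    """He-style random initialization for a fully connected ReLU net.
    `widths` lists layer sizes, e.g. [2, 16, 16, 1]."""
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def activation_pattern(params, x):
    """Return the on/off state of every hidden ReLU unit at input x.
    Inputs sharing a pattern lie in the same linear region."""
    pattern = []
    h = x
    for W, b in params[:-1]:          # hidden layers only; output layer is affine
        z = W @ h + b
        pattern.append(tuple(z > 0))
        h = np.maximum(z, 0.0)
    return tuple(pattern)

def count_regions_on_line(params, p, q, n_samples=10_000):
    """Count distinct activation patterns on the segment from p to q,
    a sampled lower bound on the linear regions the segment crosses."""
    ts = np.linspace(0.0, 1.0, n_samples)
    patterns = {activation_pattern(params, (1 - t) * p + t * q) for t in ts}
    return len(patterns)

# Toy experiment: a 2-D input, two hidden layers of width 16, scalar output.
net = init_net([2, 16, 16, 1])
p, q = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
print(count_regions_on_line(net, p, q))
```

At initialization, repeating this over many random networks gives an empirical estimate of the expected region count, which the talk contrasts with the exponential worst-case bound.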

