Theoretical Properties of Feedforward Networks, Factorized Mutual Information, and more

  • Thomas Merkh (MPI MiS, Leipzig, and Department of Mathematics, UCLA)
A3 01 (Sophus-Lie room)

Abstract

In this talk, I will present several projects that I have taken part in during my PhD studies.

First, I will discuss the sets of joint probability distributions that maximize multi-information over a collection of margins. This quantity, called the "Factorized Mutual Information" (FMI), has been used as a computationally efficient proxy for the global mutual information (MI) in the context of intrinsic rewards in embodied reinforcement learning. A comparison between the FMI maximizers and the MI maximizers will be discussed.

Second, I will review recent improvements to the sufficiency bounds under which deep stochastic feedforward networks are universal approximators of Markov kernels. This work can be seen as extending earlier investigations of the representational power of stochastic networks such as Deep Belief Networks and Restricted Boltzmann Machines.

Last, I will preview some ongoing work on the approximation properties of Convolutional Neural Networks. These networks can be viewed as special families of parameterized piecewise linear functions, and counting the maximum number of linear regions attainable by a ReLU network has previously been used to quantify the network's approximation flexibility.
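
To make the first quantity concrete, here is a minimal NumPy sketch, written for this summary rather than taken from the talk, that computes the multi-information (total correlation) of a joint distribution and averages it over a collection of margins. The choice of pairwise margins and the plain averaging are illustrative assumptions; the exact FMI functional studied in the talk may differ.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability array, ignoring zeros."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def multi_information(p):
    """Multi-information (total correlation): sum_i H(X_i) - H(X_1,...,X_n).

    p is a joint distribution with one array axis per variable.
    """
    n = p.ndim
    marginal_H = sum(
        entropy(p.sum(axis=tuple(j for j in range(n) if j != i)))
        for i in range(n))
    return marginal_H - entropy(p.ravel())

def avg_margin_multi_information(p, margins):
    """Average the multi-information over a collection of margins of p.

    margins: list of index tuples, each selecting a marginal distribution.
    (Illustrative stand-in for the FMI functional, not its exact definition.)
    """
    values = []
    for idx in margins:
        drop = tuple(j for j in range(p.ndim) if j not in idx)
        values.append(multi_information(p.sum(axis=drop)))
    return float(np.mean(values))

# Three perfectly correlated binary variables: X1 = X2 = X3, each uniform.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(multi_information(p))                                       # 2 log 2
print(avg_margin_multi_information(p, [(0, 1), (0, 2), (1, 2)]))  # log 2
```

The example already shows why the factorized quantity is only a proxy: the global multi-information of the fully correlated distribution is 2 log 2, while the average over pairwise margins is log 2.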
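For the second project, it may help to see how a deep stochastic feedforward network defines a Markov kernel in the first place. The sketch below is my own minimal construction under the usual sigmoid-Bernoulli assumptions, not code from the work under review: each layer of binary units is itself a Markov kernel, and marginalizing over the hidden layers amounts to multiplying the per-layer transition matrices.

```python
import numpy as np
from itertools import product

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_kernel(W, b):
    """Transition matrix of one stochastic sigmoid layer with binary units.

    On input x in {0,1}^n, output unit j fires independently with
    probability sigmoid(W x + b)_j.  Returns a 2^n x 2^m row-stochastic
    matrix, i.e. a Markov kernel from {0,1}^n to {0,1}^m.
    """
    m, n = W.shape
    xs = list(product([0, 1], repeat=n))
    ys = list(product([0, 1], repeat=m))
    K = np.zeros((len(xs), len(ys)))
    for i, x in enumerate(xs):
        q = sigmoid(W @ np.array(x) + b)          # per-unit firing probabilities
        for j, y in enumerate(ys):
            y = np.asarray(y)
            K[i, j] = np.prod(q ** y * (1.0 - q) ** (1 - y))
    return K

def network_kernel(layers):
    """Kernel of the whole network: the product of the layer kernels."""
    K = None
    for W, b in layers:
        K = layer_kernel(W, b) if K is None else K @ layer_kernel(W, b)
    return K

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), rng.normal(size=3)),   # 2 -> 3 units
          (rng.normal(size=(2, 3)), rng.normal(size=2))]   # 3 -> 2 units
K = network_kernel(layers)
print(K.shape)         # (4, 4): a kernel from {0,1}^2 to {0,1}^2
print(K.sum(axis=1))   # each row sums to 1
```

The universal approximation question is then which widths and depths suffice for such products of layer kernels to approximate an arbitrary target kernel.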
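Finally, the linear-region count mentioned for the third project can be probed empirically. The sketch below is a rough illustration, not the talk's methodology: it samples activation patterns of a small random ReLU network on a grid. Each linear region carries one fixed on/off pattern, so the number of distinct patterns observed lower-bounds the number of regions in the sampled box. The layer sizes and grid resolution are arbitrary choices.

```python
import numpy as np

def activation_pattern(layers, x):
    """Concatenated on/off pattern of every ReLU at input x."""
    bits, h = [], x
    for W, b in layers:
        z = W @ h + b
        bits.append(z > 0)
        h = np.maximum(z, 0.0)
    return tuple(np.concatenate(bits))

# A small random ReLU net on R^2 with two hidden layers of width 8.
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(8, 2)), rng.normal(size=8)),
          (rng.normal(size=(8, 8)), rng.normal(size=8))]

# Count distinct activation patterns over a fine grid on [-3, 3]^2.
grid = np.linspace(-3.0, 3.0, 200)
patterns = {activation_pattern(layers, np.array([u, v]))
            for u in grid for v in grid}
print(len(patterns), "distinct activation patterns (lower bound on regions)")
```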
