Geometric Aspects of Graphical Models and Neural Networks
- Nihat Ay
- Guido Montúfar
Abstract
The first part of this course introduces the basic theory of finite random fields and the corresponding geometric structures studied in information geometry. Topics include sampling algorithms and their convergence properties. Graphical models, an important class of random fields, will be discussed in more detail. The second part of the course deals with neural networks. We discuss Boltzmann machines, a class of stochastic neural networks, and elaborate on related network architectures that have become prominent in machine learning applications, including restricted Boltzmann machines, deep belief networks, and deep Boltzmann machines. We will concentrate on geometric aspects of these networks, addressing, in particular, the geometry of their parametrization and their expressive power.
Date and time info
Wednesday, 10:15–11:45
Keywords
Information theory, Boltzmann machines, geometric aspects of networks
Audience
MSc students, PhD students, Postdocs
Language
English