Geometric approximate inference for Bayesian neural networks

  • Georgios Arvanitidis (Technical University of Denmark)
Abstract

Uncertainty quantification in Bayesian deep learning is typically achieved by characterizing the posterior distribution of neural network weights given the observed data; however, exact inference is in general computationally intractable. At the same time, in the overparametrized regime, the parameter space exhibits nonlinear reparametrization invariances, implying that distinct parameter configurations correspond to the same function. Standard approximate inference methods that operate in parameter space do not explicitly account for this nonlinear structure, often leading to poor approximations of the true posterior, particularly along directions associated with these invariances.
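A minimal sketch of one such invariance (the architecture and names here are illustrative assumptions, not taken from the talk): in a ReLU network, scaling a hidden unit's incoming weights by a positive factor and its outgoing weights by the reciprocal leaves the network function unchanged, so distinct points in parameter space realize the same function.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network (hypothetical sizes: 3 inputs, 5 hidden units).
W1 = rng.normal(size=(5, 3))   # hidden-layer weights
b1 = rng.normal(size=5)        # hidden-layer biases
W2 = rng.normal(size=(1, 5))   # output weights

def f(x, W1, b1, W2):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU is positively homogeneous
    return W2 @ h

x = rng.normal(size=3)

# Per-unit positive rescaling: multiply unit i's incoming weights and bias
# by alpha_i and divide its outgoing weights by alpha_i. Because
# relu(a * z) = a * relu(z) for a > 0, the overall function is unchanged.
alpha = rng.uniform(0.5, 2.0, size=5)
W1s = alpha[:, None] * W1
b1s = alpha * b1
W2s = W2 / alpha[None, :]

print(np.allclose(f(x, W1, b1, W2), f(x, W1s, b1s, W2s)))  # True
```

A parameter-space Gaussian approximation to the posterior cannot distinguish movement along such rescaling directions (which leaves the function fixed) from movement that actually changes the function, which is one way the mismatch described above arises.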

Recent approaches leverage geometric insights to construct approximations that better align with the intrinsic structure of the parameter space. These methods provide scalable alternatives that can yield better local approximations of the posterior. In this talk, we introduce the role of geometry in approximate Bayesian inference, present representative methods, and discuss future research directions.
