Research Group

Learning and Inference

We study the mathematical foundations of inference and learning, develop methodological innovations in these areas, and apply them to biology, biomedicine, and clinical challenges. Our goal is to advance these fields through rigorous mathematical analysis.

Research

The Learning and Inference group is located at MPI MiS and the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) at Universität Leipzig. It is directed by Sayan Mukherjee, Alexander von Humboldt Professor in AI.

The research foci are the mathematical foundations of inference and learning, methodological innovations for inference and learning, and applications to biology, biomedicine, and clinical challenges.

Mathematical foundations of inference and learning

  1. The interface of geometry and topology with probability and statistics – Two themes drive this research.
    The first is importing ideas from modern geometry and topology into probabilistic modeling.
    The second is understanding the geometry and topology of random processes.
  2. Inference and dynamics – Two themes drive this research.
    The first is understanding the limits of inference and learning for dynamical systems.
    The second is developing inferential theory by treating inference itself as a dynamical system.
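The second theme has a classical entry point: many inference algorithms are discretizations of continuous-time dynamics. As a minimal, self-contained illustration (not the group's own code, and all names here are illustrative), gradient descent is the explicit Euler discretization of the gradient flow dθ/dt = −∇L(θ), so its convergence can be studied through the stability of that dynamical system:

```python
import numpy as np

# Quadratic loss L(theta) = 0.5 * theta^T A theta with A positive definite,
# so the unique minimizer is theta = 0.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])

def grad(theta):
    return A @ theta

def gradient_descent(theta0, step, n_steps):
    """Explicit Euler discretization of the gradient flow d(theta)/dt = -grad L(theta)."""
    theta = theta0.copy()
    for _ in range(n_steps):
        theta = theta - step * grad(theta)
    return theta

theta_final = gradient_descent(np.array([1.0, -2.0]), step=0.1, n_steps=500)
print(theta_final)  # converges toward the minimizer theta = 0
```

Viewing the iteration as a discretized flow is what lets tools from dynamical systems (stability, contraction rates) transfer to questions about learning algorithms.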

Methodological innovations for inference and learning

  1. Integrating modern AI with stochastic modeling – Bayesian inference has been tremendously successful in a variety of applied problems across the biological, social, and physical sciences. A key feature of Bayesian inference is its principled quantification of uncertainty in the inferential procedure. Recently, modern AI methods such as deep neural networks have transformed predictive modeling, especially in data-rich settings. We will integrate Bayesian (stochastic) modeling with modern AI (deep neural networks).
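The uncertainty quantification mentioned above can be illustrated with the simplest Bayesian building block: conjugate Bayesian linear regression, which returns a predictive distribution (mean and variance) rather than a point estimate. This is a generic textbook sketch, not the group's method; in a deep-learning hybrid, the raw inputs below would typically be replaced by features learned by a neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + Gaussian noise with standard deviation 0.5.
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

# Conjugate Bayesian linear regression:
# prior  w ~ N(0, alpha^{-1} I), likelihood y | x, w ~ N(w^T x, beta^{-1}).
alpha, beta = 1.0, 4.0                        # prior precision; noise precision = 1/0.5^2
S_inv = alpha * np.eye(1) + beta * X.T @ X    # posterior precision
S = np.linalg.inv(S_inv)                      # posterior covariance
m = beta * S @ X.T @ y                        # posterior mean

# Predictive distribution at a new input: both a prediction and a
# principled uncertainty estimate.
x_new = np.array([[1.0]])
pred_mean = (x_new @ m).item()
pred_var = (1.0 / beta + x_new @ S @ x_new.T).item()
print(pred_mean, pred_var)
```

The predictive variance decomposes into irreducible noise (1/beta) plus parameter uncertainty (x_new S x_newᵀ), which is exactly the kind of quantity a purely predictive deep network does not expose by default.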
  2. Inferential algorithms that scale – As more data are collected, and as each observation contains more and richer measurements, classical inferential procedures fail to scale. The failure is especially problematic when uncertainty must be quantified. The standard workhorse for Bayesian inference, Markov chain Monte Carlo (MCMC), does not scale to massive data.
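The scaling bottleneck is visible in even the simplest sampler. The random-walk Metropolis sketch below (a standard textbook algorithm, not the group's implementation) evaluates the log-likelihood over the entire dataset at every step, so each iteration costs O(n); with millions of observations and tens of thousands of iterations this becomes prohibitive, which motivates subsampling-based and other scalable alternatives:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: 10,000 observations from N(mu_true, 1); we infer mu with a flat prior.
mu_true = 3.0
data = rng.normal(loc=mu_true, size=10_000)

def log_posterior(mu):
    # Touches EVERY observation -- the O(n) per-step cost that makes
    # plain MCMC scale poorly to massive data.
    return -0.5 * np.sum((data - mu) ** 2)

def random_walk_metropolis(n_samples, step=0.02, mu0=0.0):
    mu, logp = mu0, log_posterior(mu0)
    samples = []
    for _ in range(n_samples):
        prop = mu + step * rng.normal()
        logp_prop = log_posterior(prop)
        # Metropolis accept/reject step.
        if np.log(rng.random()) < logp_prop - logp:
            mu, logp = prop, logp_prop
        samples.append(mu)
    return np.array(samples)

samples = random_walk_metropolis(5_000)
print(samples[2_500:].mean())  # post-burn-in posterior mean, near mu_true
```

Scalable variants (e.g. subsampled or stochastic-gradient MCMC) replace the full-data sum in `log_posterior` with a minibatch estimate, trading exactness for per-step cost.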
