Workshop

Learning through module categories.

  • Steve Oudot (École Polytechnique & Inria)
E1 05 (Leibniz-Saal)

Abstract

Modern machine and deep learning models spend much of their resources computing adequate representations of the data for the task at hand. These representations are obtained through embeddings learned from the training data, either in a supervised or an unsupervised way. The idea advocated in this talk is that it may be fruitful to use embeddings that factor through some module category. These are very different in nature from the usual classes of embeddings, and therefore they may capture information about the data that is different and, to some extent, complementary. There are several important challenges to using such embeddings, however, including how to compute, differentiate, and optimize them. I will explain how these challenges are overcome in the case of persistence modules coming from topological data analysis, and through this special example I will provide some general guidelines that might apply to other module categories as well.

The content of the talk is based on arXiv preprint 2411.00493.
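As a minimal, self-contained illustration of the kind of object the abstract alludes to (not code from the talk or the preprint): the simplest persistence module arises in degree 0, where the barcode of a point cloud records the scales at which connected components merge. The sketch below computes these death times via single-linkage merges and turns them into a fixed-length vector, a crude stand-in for a learned vectorization; the function names `h0_barcode` and `vectorize` are hypothetical.

```python
# Minimal sketch, assuming Euclidean input points (not the talk's method):
# 0-dimensional persistence of a point cloud via single-linkage merges
# (Kruskal's algorithm with union-find), followed by a simple vectorization.
import math
from itertools import combinations

def h0_barcode(points):
    """Death times of H0 classes: edge lengths at which components merge."""
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one H0 class dies at scale d
    return deaths  # every class is born at scale 0

def vectorize(deaths, k=3):
    """Crude fixed-length embedding: the k largest death times, zero-padded."""
    top = sorted(deaths, reverse=True)[:k]
    return top + [0.0] * (k - len(top))

# Two clusters of two points each: two small merges, then one large one.
pts = [(0.0, 0.0), (0.1, 0.0), (3.0, 0.0), (3.1, 0.1)]
vec = vectorize(h0_barcode(pts))
```

In practice such vectorizations are made differentiable so they can be trained end to end, which is one of the challenges the abstract highlights.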

  • Katharina Matschke (Max Planck Institute for Mathematics in the Sciences)
  • Diaaeldin Taha (Max Planck Institute for Mathematics in the Sciences)
  • Marzieh Eidi (MPI MiS & ScaDS.AI)