Workshop

A topological model for (partial) equivariance in data analysis and deep learning

  • Francesca Tombari (MPI MiS Leipzig, Leipzig, Germany)
E1 05 (Leibniz-Saal)

Abstract

One of the key reasons for the success of geometric deep learning is that it allows a significant dimensionality reduction by introducing an inductive bias based on prior knowledge of the symmetries of the data under study. In parallel with the development of geometric deep learning techniques, the theory of group equivariant non-expansive operators (GENEOs) has taken shape. GENEOs give a topological model for encoding equivariance and can be used to design layers of neural networks. After introducing GENEOs and some of their applications, we will discuss a generalisation to partial equivariance. The main motivation is that, due to noise and incompleteness of data, the symmetries of a data set cannot always be modelled as a group. We therefore generalise GENEOs so that partial equivariance can be taken into account. The space of partially equivariant non-expansive operators (P-GENEOs) retains most of the nice features of the space of GENEOs, such as compactness and convexity.
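To make the two defining properties concrete, here is a minimal toy sketch (not the construction from the talk): signals on a cyclic domain, with the group of cyclic shifts acting on them. The operator below averages a signal with its shift by one step; this commutes with the group action (equivariance) and does not increase sup-norm distances (non-expansiveness). The function names `shift` and `geneo` are illustrative choices, not standard API.

```python
import numpy as np

def shift(phi, g):
    """Action of the cyclic group Z_n on a signal: shift by g positions."""
    return np.roll(phi, g)

def geneo(phi):
    """A toy GENEO: average the signal with its shift by one.

    Equivariant: geneo(shift(phi, g)) == shift(geneo(phi), g).
    Non-expansive: sup|geneo(phi) - geneo(psi)| <= sup|phi - psi|,
    since averaging two differences cannot exceed the largest one.
    """
    return 0.5 * (phi + np.roll(phi, 1))

phi = np.array([1.0, 3.0, 2.0, 5.0])
psi = np.array([0.0, 4.0, 1.0, 2.0])

# Equivariance check
equivariant = np.allclose(geneo(shift(phi, 2)), shift(geneo(phi), 2))

# Non-expansiveness check (sup norm)
non_expansive = (np.max(np.abs(geneo(phi) - geneo(psi)))
                 <= np.max(np.abs(phi - psi)) + 1e-12)
```

Partial equivariance, as discussed in the talk, would correspond to requiring the equivariance condition only for a subset of the group rather than for every shift.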

Katharina Matschke

Max Planck Institute for Mathematics in the Sciences

Samantha Fairchild

Max Planck Institute for Mathematics in the Sciences

Diaaeldin Taha

Max Planck Institute for Mathematics in the Sciences

Anna Wienhard

Max Planck Institute for Mathematics in the Sciences