The classical persistent homology transform was introduced in the field of topological data analysis about 10 years ago and has since proven to be a very powerful descriptor of Euclidean shapes. The transform sends a shape $X$ to the map associating to each direction $v$ on the sphere $S^{n-1}$ the persistence diagram with respect to the height function $h_v$. The transform has been shown to be injective (it is a sufficient shape statistic: probing a shape from every direction completely describes it), and for each shape it gives a continuous map from the sphere to the space of persistence diagrams.

We introduce a generalised persistent homology transform (PHT) in which we consider arbitrary parameter spaces and arbitrary filtration functions. In particular, we define the "distance-from-flat" PHT, where the parameter space is the affine Grassmannian $AG(m,n)$ of $m$-dimensional affine subspaces of $R^n$, and the filtration functions $d_P$ encode the distance from a given flat $P$. We prove that this version retains continuity and injectivity while offering computational advantages over the classical PHT. In particular, homology in degree 0 suffices for the injectivity of the distance-from-line (so-called tubular) PHT, yielding an efficient tool that can outperform top neural networks in shape classification.
This is joint work with Adam Onus and Nina Otter.
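To make the computational appeal of the degree-0 statement concrete, here is a minimal sketch (not the speakers' implementation) of the tubular idea: approximate a shape by a graph, filter its vertices by distance from a line, and read off the 0-dimensional persistence diagram with a union-find sweep. The function names and the toy circle are illustrative.

```python
import numpy as np

def dist_from_line(points, p0, direction):
    """Distance of each point from the line p0 + t * direction."""
    d = direction / np.linalg.norm(direction)
    diff = points - p0
    proj = diff @ d
    return np.linalg.norm(diff - np.outer(proj, d), axis=1)

def degree0_persistence(f, edges):
    """0-dim persistence of the lower-star filtration of a graph.

    f[i] is the filtration value of vertex i; an edge enters at the
    max of its endpoints' values. Union-find with the elder rule:
    on a merge, the component with the larger birth dies.
    """
    parent = list(range(len(f)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    diagram = []
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if f[ru] > f[rv]:          # keep the older root (smaller birth)
            ru, rv = rv, ru
        diagram.append((f[rv], max(f[u], f[v])))  # younger component dies
        parent[rv] = ru
    diagram.append((min(f), np.inf))  # the surviving component never dies
    return diagram

# Toy shape: a circle in R^2 probed by the line y = 0. Two components are
# born near height 0 and merge near height 1, plus one essential class.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
edges = [(i, (i + 1) % 100) for i in range(100)]
f = dist_from_line(pts, np.zeros(2), np.array([1.0, 0.0]))
print(degree0_persistence(f, edges))
```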
The manifold hypothesis states that real-world data often lies close to a low-dimensional manifold. This implies that "data has a shape" and exhibits topological features such as holes and voids, but also geometrical features such as curvature. Manifold learning aims to exploit this to find meaningful low-dimensional representations and visualizations of high-dimensional data, both to gain further insight and to use the representations in downstream tasks. In this talk, I discuss some fundamental ideas of representation learning and present preliminary results from an empirical study on how autoencoders with a persistent-homology-based topological regularization term could be used to learn latent representations that are not only topologically aligned but also preserve the extrinsic curvature of the data.
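As a rough illustration of the kind of regularizer meant here, a minimal sketch assuming the common simplification that 0-dimensional persistence of a batch is summarized by its minimum-spanning-tree edge lengths; the penalty compares these summaries between input and latent space. This is an illustrative stand-in, not the study's exact term.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_lengths(points):
    """Sorted MST edge lengths: the finite 0-dim persistence (death)
    values of the Vietoris-Rips filtration of the point set."""
    mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
    return np.sort(mst[mst > 0])

def topological_penalty(x_batch, z_batch):
    """Mismatch between the 0-dim persistence summaries of a batch in
    input space (x) and in latent space (z)."""
    return float(np.sum((mst_edge_lengths(x_batch) - mst_edge_lengths(z_batch)) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))
q, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # a random isometry
print(topological_penalty(x, 0.5 * x))  # > 0: scaling distorts persistence
print(topological_penalty(x, x @ q))    # ~ 0: isometries preserve it
```

In an actual autoencoder loss the term must additionally be differentiable; topology-preserving approaches in the literature achieve this by backpropagating through the distances of the selected MST edges.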
Representational similarity analysis (RSA) is widely used to analyze the alignment between humans and neural networks; however, conclusions based on this approach can be misleading without considering the underlying representational geometry.
Our work introduces a framework using Ollivier-Ricci curvature and Ricci flow to analyze the fine-grained local structure of representations. This approach is agnostic to the source of the representational space, enabling a direct geometric comparison between human behavioral judgments and a model's vector embeddings. We apply it to compare human similarity judgments for 2D and 3D face stimuli with a baseline 2D-native network (VGG-Face) and a variant of it aligned to human behavior.
Our results suggest that geometry-aware analysis provides a more sensitive characterization of the underlying representations, exposing geometric discrepancies that RSA captures only partially. Notably, we reveal geometric inconsistencies in the alignment when moving from 2D to 3D viewing conditions. This highlights how incorporating geometric information can surface alignment differences missed by traditional metrics, offering deeper insight into representational organization.
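For readers unfamiliar with the tool, a minimal sketch of Ollivier-Ricci curvature on a graph built from a representation space. The edge curvature is $\kappa(x,y) = 1 - W_1(\mu_x, \mu_y)/d(x,y)$, with $\mu_x$ a lazy random-walk measure at $x$; the graph construction, the POT-based transport solve, and $\alpha = 0.5$ are illustrative choices, not the paper's pipeline.

```python
import numpy as np
import networkx as nx
import ot  # POT: Python Optimal Transport

def lazy_measure(G, node, alpha=0.5):
    """Mass alpha at the node, the rest spread evenly over its neighbors."""
    nbrs = list(G.neighbors(node))
    return [node] + nbrs, np.array([alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs))

def ollivier_ricci(G, x, y, alpha=0.5):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), shortest-path ground metric."""
    sx, mx = lazy_measure(G, x, alpha)
    sy, my = lazy_measure(G, y, alpha)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    cost = np.array([[dist[u][v] for v in sy] for u in sx], dtype=float)
    return 1.0 - ot.emd2(mx, my, cost) / dist[x][y]

# Dense cliques are positively curved, trees negatively curved:
print(ollivier_ricci(nx.complete_graph(5), 0, 1))    # > 0
print(ollivier_ricci(nx.balanced_tree(2, 3), 0, 1))  # < 0
```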
Relative Entropy Policy Iteration is a reinforcement learning framework that alternates between policy evaluation and relative-entropy–regularized improvement. A prominent example is Maximum a Posteriori Policy Optimization (MPO), widely used in robotics and control. In this talk, I revisit the underlying principle of MPO from a theoretical perspective and suggest that its core idea may extend to much more general settings, including nonlinear utilities and continuous state-action spaces. The theory is still incomplete, but I will outline a possible mathematical framework and hope to gather feedback and ideas from the audience on how to formalize and analyze this perspective.
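For context, the improvement step underlying MPO has a closed form in the discrete-action case: subject to $KL(q \| \pi) \le \varepsilon$, the improved policy is $q(a) \propto \pi(a)\exp(Q(a)/\eta)$, with the temperature $\eta$ obtained by minimizing a convex dual. A minimal single-state sketch with toy numbers and illustrative names (the continuous settings discussed in the talk go beyond this):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def improved_policy(pi, q_values, eta):
    """Closed-form KL-regularized improvement: q ∝ pi * exp(Q / eta)."""
    w = pi * np.exp((q_values - q_values.max()) / eta)  # shift for stability
    return w / w.sum()

def dual(eta, pi, q_values, eps):
    """Convex dual g(eta) = eta * eps + eta * log sum_a pi(a) exp(Q(a) / eta)."""
    shifted = (q_values - q_values.max()) / eta
    return eta * eps + eta * np.log(np.sum(pi * np.exp(shifted))) + q_values.max()

pi = np.array([0.25, 0.25, 0.25, 0.25])   # current policy at one state
Q = np.array([1.0, 2.0, 0.5, 1.5])        # action values
eps = 0.1                                  # KL trust-region radius

res = minimize_scalar(dual, bounds=(1e-3, 1e3), args=(pi, Q, eps), method="bounded")
q = improved_policy(pi, Q, res.x)
kl = np.sum(q * np.log(q / pi))
print(q, kl)  # improved policy; KL(q || pi) ≈ eps when the constraint binds
```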
Networks are powerful tools for modeling interactions in complex systems. While traditional networks use scalar edge weights, many real-world systems involve multidimensional interactions. For example, in social networks individuals often hold multiple interconnected opinions, and each of them can affect different opinions of other individuals; such interactions are better characterized by matrices than by scalars. We propose a general framework for modeling such multidimensional interacting dynamics: matrix-weighted networks (MWNs). We present the mathematical foundations of MWNs and examine consensus dynamics and random walks within this context. Our results reveal that the coherence of MWNs gives rise to non-trivial steady states that generalize the notions of communities and structural balance in traditional networks.
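As one concrete instance of such dynamics, a minimal sketch of matrix-weighted consensus, assuming the linear update $\dot{x}_i = -\sum_j A_{ij}(x_i - W_{ij} x_j)$ with orthogonal edge matrices; the rotation weights and triangle topology are illustrative, not the paper's exact model. Because the rotations compose to the identity around the cycle, the network is coherent and the states lock into a non-trivial pattern rather than collapsing to uniform agreement.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

n, d = 3, 2                        # three nodes with 2-dimensional opinions
A = np.ones((n, n)) - np.eye(n)    # triangle (complete graph on 3 nodes)

# W[i][j] transforms node j's state as seen by node i; W[j][i] = W[i][j].T
# keeps each edge consistent. The three rotations compose to the identity
# around the cycle, so this MWN is coherent.
W = [[np.eye(d) for _ in range(n)] for _ in range(n)]
W[0][1] = rotation(2 * np.pi / 3); W[1][0] = W[0][1].T
W[1][2] = rotation(2 * np.pi / 3); W[2][1] = W[1][2].T
W[0][2] = rotation(-2 * np.pi / 3); W[2][0] = W[0][2].T

rng = np.random.default_rng(1)
x = rng.normal(size=(n, d))
dt = 0.01
for _ in range(5000):              # Euler integration of dx_i/dt
    dx = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                dx[i] -= A[i, j] * (x[i] - W[i][j] @ x[j])
    x = x + dt * dx

print(x)  # a non-trivial locked pattern, not uniform agreement
print([np.linalg.norm(x[i] - W[i][j] @ x[j]) for i, j in [(0, 1), (1, 2), (0, 2)]])
# edge residuals ~ 0: each state equals its neighbor's state transported
# along the edge matrix
```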