The manifold hypothesis states that real-world data often lie close to a low-dimensional manifold. In this sense, "data has a shape": it exhibits topological features such as holes and voids, as well as geometric features such as curvature. Manifold learning aims to exploit this structure to find meaningful low-dimensional representations and visualizations of high-dimensional data, both for gaining insight and for downstream tasks. In this talk, I discuss some fundamental ideas of representation learning and present preliminary results from an empirical study on how autoencoders with a persistent-homology-based topological regularization term can learn latent representations that are not only topologically aligned with the data but also preserve its extrinsic curvature.
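As a toy illustration of the idea (not code from the study), the sketch below computes the 0-dimensional persistence death times of a point cloud. For a Vietoris-Rips filtration, all connected components are born at scale 0 and die at the minimum-spanning-tree edge lengths, so the 0-dimensional diagram can be read off an MST. A simple penalty then compares the diagrams of an input batch and its latent codes; actual topological autoencoders use differentiable persistence pairings, and both function names here are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def zeroth_persistence_deaths(points):
    """0-dimensional persistence of a Vietoris-Rips filtration.
    All components are born at 0 and die when an MST edge merges them,
    so the death times are exactly the MST edge lengths."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist).toarray()
    return np.sort(mst[mst > 0])

def topological_penalty(x, z):
    """Toy topological regularizer: squared mismatch between the sorted
    0-dim death times of an input batch x and its latent codes z
    (assumes both batches have the same number of points)."""
    dx = zeroth_persistence_deaths(x)
    dz = zeroth_persistence_deaths(z)
    return float(np.sum((dx - dz) ** 2))
```

A training loss would then combine this with reconstruction error, e.g. `loss = mse(x, decoder(z)) + lam * topological_penalty(x, z)`; note this numpy version is not differentiable and serves only to show what is being matched.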
Representational similarity analysis (RSA) is widely used to assess the alignment between human and neural network representations; however, conclusions drawn from this approach can be misleading if the underlying representational geometry is not taken into account.
Our work introduces a framework using Ollivier-Ricci curvature and Ricci flow to analyze the fine-grained local structure of representations. This approach is agnostic to the source of the representational space, enabling a direct geometric comparison between human behavioral judgments and a model's vector embeddings. We apply it to compare human similarity judgments for 2D and 3D face stimuli with a baseline 2D-native network (VGG-Face) and a variant of it aligned to human behavior.
Our results suggest that geometry-aware analysis provides a more sensitive characterization of discrepancies in the underlying representations, which RSA captures only partially. Notably, we reveal geometric inconsistencies in the alignment when moving from 2D to 3D viewing conditions. This highlights how incorporating geometric information can expose alignment differences missed by traditional metrics, offering deeper insight into representational organization.
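To make the central quantity concrete: the Ollivier-Ricci curvature of an edge (x, y) is kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), where m_x is the uniform measure on the neighbors of x and W1 is the 1-Wasserstein distance. The sketch below (hypothetical helper functions, not the talk's code) computes it for small unweighted graphs by solving the optimal-transport linear program directly.

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def bfs_distances(adj, source):
    """Hop distances from source in an unweighted graph (adjacency dict)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def w1(mu, nu, cost):
    """1-Wasserstein distance between discrete measures via the transport LP."""
    m, n = len(mu), len(nu)
    A_eq, b_eq = [], []
    for i in range(m):            # row marginals: sum_j p_ij = mu_i
        row = np.zeros((m, n)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(mu[i])
    for j in range(n):            # column marginals: sum_i p_ij = nu_j
        col = np.zeros((m, n)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(nu[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

def ollivier_ricci(adj, x, y):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), m_x uniform on x's neighbors."""
    nx_, ny_ = sorted(adj[x]), sorted(adj[y])
    mu = np.full(len(nx_), 1.0 / len(nx_))
    nu = np.full(len(ny_), 1.0 / len(ny_))
    dist = {u: bfs_distances(adj, u) for u in nx_}
    cost = np.array([[dist[u][v] for v in ny_] for u in nx_], dtype=float)
    return 1.0 - w1(mu, nu, cost) / bfs_distances(adj, x)[y]
```

For representations rather than graphs, one would first build a neighborhood graph (e.g. k-nearest neighbors) over the embedding or judgment space and then apply the same edge-wise computation.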
Relative Entropy Policy Iteration is a reinforcement learning framework that alternates between policy evaluation and relative-entropy–regularized improvement. A prominent example is Maximum a Posteriori Policy Optimization (MPO), widely used in robotics and control. In this talk, I revisit the underlying principle of MPO from a theoretical perspective and suggest that its core idea may extend to much more general settings, including nonlinear utilities and continuous state-action spaces. The theory is still incomplete, but I will outline a possible mathematical framework and hope to gather feedback and ideas from the audience on how to formalize and analyze this perspective.
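The core improvement step has a closed form in the discrete case: maximizing E_q[Q] - eta * KL(q || pi) over distributions q yields q(a) proportional to pi(a) * exp(Q(a) / eta). The minimal numpy sketch below illustrates this reweighting (the function name is illustrative; full MPO additionally fits a parametric policy to these weights in an M-step).

```python
import numpy as np

def kl_regularized_improvement(pi, q_values, eta):
    """Closed-form solution of max_q E_q[Q] - eta * KL(q || pi)
    for a discrete action space: q(a) is proportional to pi(a) * exp(Q(a)/eta)."""
    logits = np.log(pi) + q_values / eta
    logits -= logits.max()          # subtract max for numerical stability
    q = np.exp(logits)
    return q / q.sum()
```

The temperature eta controls the trade-off: small eta concentrates mass on the greedy action, while large eta keeps the improved policy close to the current one.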
Networks are powerful tools for modeling interactions in complex systems. While traditional networks use scalar edge weights, many real-world systems involve multidimensional interactions. In social networks, for example, individuals hold multiple interconnected opinions, and each opinion can influence several opinions of other individuals; such interactions are better characterized by matrices than by scalars. We propose a general framework for modeling such multidimensional interacting dynamics: matrix-weighted networks (MWNs). We present the mathematical foundations of MWNs and examine consensus dynamics and random walks in this setting. Our results reveal that the coherence of MWNs gives rise to non-trivial steady states that generalize the notions of communities and structural balance in traditional networks.
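A common formulation of matrix-weighted consensus (assumed here for illustration; the talk's model may differ in details) replaces scalar edge weights with symmetric positive semidefinite matrices W_ij, giving the dynamics x_i' = -sum_j W_ij (x_i - x_j) on vector-valued node states, governed by a block Laplacian. The sketch below builds that block Laplacian and simulates the dynamics with a forward-Euler step; both function names are hypothetical.

```python
import numpy as np

def block_laplacian(n, d, weights):
    """Block Laplacian of a matrix-weighted network with n nodes and
    d-dimensional states. weights maps edges (i, j) to symmetric PSD
    d x d matrices."""
    L = np.zeros((n * d, n * d))
    for (i, j), W in weights.items():
        for a, b in ((i, j), (j, i)):
            L[a*d:(a+1)*d, a*d:(a+1)*d] += W   # degree block
            L[a*d:(a+1)*d, b*d:(b+1)*d] -= W   # adjacency block
    return L

def consensus(x0, L, step=0.05, iters=2000):
    """Forward-Euler discretization of the consensus dynamics x' = -L x."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * L @ x
    return x
```

With identity matrix weights on a connected graph, this reduces to classical consensus and every node converges to the average initial state; non-identity weights (e.g. rotations or low-rank projections) are exactly where the non-trivial steady states described above can arise.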