Workshop

Unlocking Neural Composition with Relative Representations

Abstract

Ideally, the distribution of the latent representations within any neural network should depend only on the task, the data, the loss, and other architecture-specific constraints. However, factors such as random weight initialization, hyperparameters, or other sources of randomness may induce incoherent latent spaces that hinder any form of reuse. In this talk, I'll report an important (and somewhat surprising) empirical observation: under the same data and modeling choices, distinct latent spaces typically differ by an unknown quasi-isometric transformation; that is, the distances between the encodings are (approximately) preserved across spaces. I'll then show how simply adopting pairwise similarities as an alternative data representation leads to guaranteed isometry invariance of the latent spaces, effectively enabling latent space communication: from zero-shot model stitching to latent space comparison between diverse settings. Several validation experiments will follow on different datasets, spanning various modalities (images, text, graphs), tasks (e.g., classification, reconstruction), and architectures (e.g., CNNs, GCNs, transformers).
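
The abstract does not spell out implementation details, but a minimal sketch of the pairwise-similarity idea (in the spirit of relative representations) is given below: each sample is re-expressed by its cosine similarities to a fixed set of anchor samples, which makes the representation invariant to isometries (e.g., rotations) of the latent space. The choice of cosine similarity, the anchor-based formulation, and all function names here are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def relative_representation(embeddings, anchors, eps=1e-8):
    """Re-express each embedding by its cosine similarities to fixed anchors.

    An illustrative sketch, not the speaker's code.
    embeddings: (n, d) array of latent vectors from one encoder
    anchors:    (k, d) array of latent vectors of the same k anchor samples
    returns:    (n, k) array; row i holds cosine similarities of sample i to the anchors
    """
    # Normalize rows so that the dot product equals cosine similarity.
    emb = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + eps)
    anc = anchors / (np.linalg.norm(anchors, axis=1, keepdims=True) + eps)
    return emb @ anc.T

# Toy check: two latent spaces that differ by a rotation (an isometry)
# yield numerically identical relative representations.
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 16))                    # encodings from "network A"
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal map
Z_rot = Z @ Q                                   # encodings from "network B": rotated copy
A, A_rot = Z[:3], Z_rot[:3]                     # the same 3 samples used as anchors
print(np.allclose(relative_representation(Z, A),
                  relative_representation(Z_rot, A_rot)))  # True
```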

Katharina Matschke

Max Planck Institute for Mathematics in the Sciences

Samantha Fairchild

Max Planck Institute for Mathematics in the Sciences

Diaaeldin Taha

Max Planck Institute for Mathematics in the Sciences

Anna Wienhard

Max Planck Institute for Mathematics in the Sciences