An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning

Abstract

Motivated by the developing mathematics of deep learning, we build universal function approximators of continuous maps between arbitrary Polish metric spaces X and Y using elementary functions between Euclidean spaces as building blocks. Earlier results assume that the output space Y is a topological vector space. We overcome this limitation by “randomization”: our approximators output discrete probability measures over Y. When X and Y are Polish without additional structure, we prove very general qualitative guarantees; when they have suitable combinatorial structure, we prove quantitative guarantees for Hölder-like maps, including maps between finite graphs, solution operators to rough differential equations between certain Carnot groups, and continuous non-linear operators between Banach spaces arising in inverse problems. In particular, we show that the required number of Dirac measures is determined by the combinatorial structure of X and Y. For barycentric Y, including Banach spaces, R-trees, Hadamard manifolds, or Wasserstein spaces on Polish metric spaces, our approximators reduce to Y-valued functions. When the Euclidean approximators are neural networks, our constructions generalize transformer networks, providing a new probabilistic viewpoint on geometric deep learning.
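As a concrete illustration of the construction described above, the following is a minimal sketch, not the paper's actual method: it assumes X = R^d, a small random-weight MLP as the Euclidean building block, and N fixed anchor points y_1, ..., y_N in Y = R^2 (all names, dimensions, and the choice of anchors are illustrative assumptions). The model maps an input x to the weights of a discrete probability measure Σ_i w_i(x) δ_{y_i}; when Y is barycentric, e.g. a Banach space, the measure collapses to the single point Σ_i w_i(x) y_i.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, hidden = 3, 8, 16          # input dim, number of Dirac masses, MLP width
                                  # (illustrative choices, not from the paper)

W1 = rng.normal(size=(hidden, d))
W2 = rng.normal(size=(N, hidden))

def mlp(x):
    """Euclidean building block: R^d -> R^N (logits for the mixing weights)."""
    return W2 @ np.tanh(W1 @ x)

def dirac_mixture_weights(x):
    """Weights w(x) of the discrete measure sum_i w_i(x) * delta_{y_i}."""
    logits = mlp(x)
    w = np.exp(logits - logits.max())   # softmax, numerically stabilized
    return w / w.sum()

def barycentric_readout(x, anchors):
    """For barycentric Y (here Y = R^2), collapse the measure to its
    barycenter: the ordinary weighted average sum_i w_i(x) * y_i."""
    return dirac_mixture_weights(x) @ anchors

anchors = rng.normal(size=(N, 2))       # y_1, ..., y_N in Y = R^2
x = rng.normal(size=d)
print(dirac_mixture_weights(x))         # probability weights, sum to 1
print(barycentric_readout(x, anchors))  # a single point of Y
```

The softmax mixing over fixed anchor points is the same mechanism by which attention layers combine value vectors, which is one way to read the abstract's remark that the construction generalizes transformer networks.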

Katharina Matschke

Max Planck Institute for Mathematics in the Sciences

Samantha Fairchild

Max Planck Institute for Mathematics in the Sciences

Diaaeldin Taha

Max Planck Institute for Mathematics in the Sciences

Anna Wienhard

Max Planck Institute for Mathematics in the Sciences