
Workshop

Kernel Distances for Deep Generative Models

  • Michael Arbel (Gatsby Computational Neuroscience Unit, University College London)
E1 05 (Leibniz-Saal)

Abstract

Generative adversarial networks (GANs) achieve state-of-the-art performance in generating high-quality images.

Key to GAN performance is the critic, which learns to discriminate between real and artificially generated images. Various divergence families have been proposed for such critics, including f-divergences (the f-GAN family) and integral probability metrics (the Wasserstein and MMD GANs). In recent GAN training approaches, these critic divergence measures have been learned using gradient regularization strategies, which have contributed significantly to their success.
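For readers unfamiliar with the integral probability metric view, the following is the standard definition of the maximum mean discrepancy (MMD) underlying the MMD GAN critic; the notation (kernel $k$, RKHS $\mathcal{H}$) is introduced here and does not appear in the abstract itself:

\[
\mathrm{MMD}(P, Q) \;=\; \sup_{\|f\|_{\mathcal{H}} \le 1} \Big( \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \Big),
\qquad
\mathrm{MMD}^2(P, Q) \;=\; \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')],
\]

where $X, X' \sim P$ and $Y, Y' \sim Q$ independently. In MMD GANs, the kernel is applied on top of learned critic features, so the critic is trained by maximizing this distance between real and generated samples.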

In this talk, we will introduce and analyze a data-adaptive gradient penalty as a critic regularizer for the MMD GAN. We propose a method to constrain the gradient analytically and relate it to the weak continuity of the resulting distributional loss functional. We also demonstrate experimentally that such a regularized functional improves on existing state-of-the-art methods for unsupervised image generation on CelebA and ImageNet.
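As a rough illustration only, and not the analytic, data-adaptive penalty described in the talk, the sketch below shows a generic MMD critic objective with a WGAN-GP-style penalty on the gradient of the empirical witness function. The critic network, the kernel bandwidth sigma, and the penalty weight lam are placeholders assumed here, written in PyTorch.

# Illustrative sketch: MMD critic loss with a gradient penalty on the
# empirical witness function. Generic construction, not the data-adaptive
# penalty from the talk; `critic`, `sigma`, and `lam` are assumptions.
import torch


def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))


def mmd2(fx, fy, sigma=1.0):
    # Biased estimate of MMD^2 between critic features of real (fx) and fake (fy) samples.
    return (gaussian_kernel(fx, fx, sigma).mean()
            + gaussian_kernel(fy, fy, sigma).mean()
            - 2.0 * gaussian_kernel(fx, fy, sigma).mean())


def witness(fz, fx, fy, sigma=1.0):
    # Empirical MMD witness at features fz: mean_x k(fz, fx) - mean_y k(fz, fy).
    return gaussian_kernel(fz, fx, sigma).mean(dim=1) - gaussian_kernel(fz, fy, sigma).mean(dim=1)


def critic_loss(critic, x_real, x_fake, sigma=1.0, lam=10.0):
    fx, fy = critic(x_real), critic(x_fake)
    mmd_term = mmd2(fx, fy, sigma)

    # Penalize the squared gradient norm of the witness function w.r.t. the input,
    # in the spirit of WGAN-GP, evaluated here at the real samples.
    x = x_real.detach().clone().requires_grad_(True)
    w = witness(critic(x), fx.detach(), fy.detach(), sigma).sum()
    grad = torch.autograd.grad(w, x, create_graph=True)[0]
    penalty = grad.flatten(1).pow(2).sum(dim=1).mean()

    # The critic ascends the MMD, so minimize its negative plus the penalty.
    return -mmd_term + lam * penalty

In the talk's setting the gradient constraint is instead imposed analytically; the sketch is only meant to convey the general shape of gradient-regularized MMD critics.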

Based on joint work with Dougal Sutherland, Mikołaj Bińkowski, and Arthur Gretton.

Contact

Valeria Hünniger (Max Planck Institute for Mathematics in the Sciences)
Guido Montúfar (Max Planck Institute for Mathematics in the Sciences)