We introduce a new method for training GANs by applying the Wasserstein-2 metric proximal to the generators. This approach is based on the gradient operator induced by optimal transport theory, which connects the geometry of the sample space and the parameter space in implicit deep generative models. From this theory, we obtain an easy-to-implement regularizer for the parameter updates. Our experiments demonstrate that this method improves the speed and stability of GAN training in terms of wall-clock time and Fréchet Inception Distance (FID) learning curves.
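To give a concrete sense of what such a regularizer might look like in practice, here is a minimal PyTorch sketch. It assumes the proximal term takes a relaxed form, penalizing the mean squared displacement of generator samples under a shared latent code, E_z ||g_theta(z) - g_theta_old(z)||^2, which upper-bounds the squared Wasserstein-2 distance between the old and new generator distributions. All names (Generator, Discriminator, prox_weight, inner_steps, latent_dim) and architectural choices are illustrative assumptions, not taken from the paper.

```python
# Sketch of a Wasserstein-2 proximal update for a GAN generator, under the
# assumption that the proximal term is relaxed to a sample-space penalty:
#   argmin_theta  E[adversarial loss] + prox_weight * E_z ||G(z) - G_old(z)||^2
import copy

import torch
import torch.nn as nn

latent_dim = 64  # illustrative latent dimension


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # toy 2-D sample space
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 1),  # raw logit
        )

    def forward(self, x):
        return self.net(x)


def proximal_generator_step(G, D, opt_G, batch_size=256,
                            prox_weight=0.1, inner_steps=5):
    """Approximate one proximal update of the generator by taking a few
    gradient steps on the regularized objective around a frozen snapshot."""
    # Freeze a copy of the generator at the current iterate; the penalty
    # keeps the updated generator close to it in the W2 sense.
    G_old = copy.deepcopy(G)
    for p in G_old.parameters():
        p.requires_grad_(False)

    for _ in range(inner_steps):
        z = torch.randn(batch_size, latent_dim)
        fake = G(z)

        # Standard non-saturating generator loss; D is held fixed here
        # (only opt_G is stepped), gradients flow through it to G.
        adv = nn.functional.binary_cross_entropy_with_logits(
            D(fake), torch.ones(batch_size, 1))

        # Relaxed proximal term: the shared latent z couples samples of the
        # old and new generators, so this mean squared displacement is the
        # cost of one particular coupling and hence bounds W2^2 from above.
        prox = ((fake - G_old(z)) ** 2).sum(dim=1).mean()

        loss = adv + prox_weight * prox
        opt_G.zero_grad()
        loss.backward()
        opt_G.step()
    return loss.item()


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
    print(proximal_generator_step(G, D, opt_G))
```

Note that the penalty's gradient vanishes at the snapshot itself, so the proximal effect only emerges over the several inner steps taken around the frozen G_old; a single gradient step would coincide with an unregularized one.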