Talk

Gaussian-Bernoulli RBMs Without Tears

  • Renjie Liao (University of British Columbia)

Abstract

We revisit the challenging problem of training Gaussian-Bernoulli restricted Boltzmann machines (GRBMs), introducing two innovations. First, we propose a novel Gibbs-Langevin sampling algorithm that outperforms existing methods such as Gibbs sampling. Second, we modify the contrastive divergence (CD) algorithm in two ways: 1) adding variance-dependent initial step sizes for negative sampling; 2) drawing initial negative samples from Gaussian noise. We show that this modified CD, combined with gradient clipping, is enough to robustly train GRBMs with large learning rates, removing the need for the various tricks used in the literature. Moreover, it enables GRBMs to generate samples starting from noise, allowing direct comparisons with deep generative models and improving evaluation protocols in the RBM literature. Experiments on Gaussian mixtures, MNIST, FashionMNIST, and CelebA show that GRBMs can generate good samples despite their single-hidden-layer architecture. Our code is released at github.com/lrjconan/GRBM.
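
To make the training recipe concrete, the following is a minimal sketch, assuming a unit-variance Gaussian-Bernoulli RBM and plain block Gibbs sampling in place of the paper's Gibbs-Langevin sampler: the negative chains are initialized from Gaussian noise and gradients are clipped, as in the modified CD described above. All names, sizes, and hyperparameters here are illustrative, not the authors' implementation; the released code is at github.com/lrjconan/GRBM.

import torch

torch.manual_seed(0)

n_vis, n_hid, batch = 784, 256, 64           # hypothetical sizes (e.g. flattened 28x28 images)
W = (0.01 * torch.randn(n_vis, n_hid)).requires_grad_()
b = torch.zeros(n_vis, requires_grad=True)   # visible bias
c = torch.zeros(n_hid, requires_grad=True)   # hidden bias
opt = torch.optim.SGD([W, b, c], lr=1e-2)    # comparatively large learning rate

def free_energy(v):
    # F(v) = ||v - b||^2 / 2 - sum_j softplus(c_j + (vW)_j), assuming unit visible variance
    return 0.5 * ((v - b) ** 2).sum(1) - torch.nn.functional.softplus(v @ W + c).sum(1)

def gibbs_step(v):
    # h ~ Bernoulli(sigmoid(vW + c)), then v ~ N(b + Wh, I)
    h = torch.bernoulli(torch.sigmoid(v @ W + c))
    return b + h @ W.t() + torch.randn_like(v)

def cd_update(v_data, k=20):
    # Modified-CD-style step: the negative chain starts from Gaussian noise, not from data.
    v_neg = torch.randn_like(v_data)
    with torch.no_grad():
        for _ in range(k):
            v_neg = gibbs_step(v_neg)
    # CD surrogate loss: free energy of data minus free energy of negative samples.
    loss = free_energy(v_data).mean() - free_energy(v_neg).mean()
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_([W, b, c], max_norm=10.0)  # gradient clipping
    opt.step()
    return loss.item()

# Usage: one update on a random batch standing in for standardized image data.
print(cd_update(torch.randn(batch, n_vis)))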

Math Machine Learning seminar MPI MIS + UCLA

Location: MPI for Mathematics in the Sciences, Live Stream

Contact: Katharina Matschke, MPI for Mathematics in the Sciences (contact via mail)
