Talk

Using Hamilton-Jacobi PDEs in Optimization

  • Samy Wu Fung (Colorado School of Mines)
Live Stream

Abstract

First-order optimization algorithms are widely used today. Two standard building blocks of these algorithms are proximal operators (proximals) and gradients. Although gradients can be computed for a wide array of functions, explicit proximal formulas are known only for limited classes of functions. We provide an algorithm, HJ-Prox, that accurately approximates these proximals. It is derived from a collection of relations between proximals, Moreau envelopes, Hamilton-Jacobi (HJ) equations, heat equations, and importance sampling. In particular, HJ-Prox smoothly approximates the Moreau envelope and its gradient, and the degree of smoothing can be adjusted so that it acts as a denoiser. Our approach applies even when functions are accessible only through (possibly noisy) black-box samples. It can also be embedded into a zero-order algorithm with guaranteed convergence to global minima, assuming continuity of the objective function; this is done by leveraging the connection between the gradient of the Moreau envelope and the proximal operator. We demonstrate the effectiveness of HJ-Prox numerically on several examples.
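
To make the construction concrete, the following short Python sketch illustrates a sampling-based proximal approximation in the spirit described above. It is an illustration under stated assumptions, not the authors' HJ-Prox implementation: it assumes the estimate takes the form of a softmax-weighted average of Gaussian samples, which is the standard way the Moreau-envelope / heat-equation (Cole-Hopf) connection is turned into an estimator; the function hj_prox_sketch, its parameters, and the test function are illustrative choices.

import numpy as np

def hj_prox_sketch(f, x, t=1.0, delta=0.1, n_samples=10_000, rng=None):
    # Illustrative sampling-based proximal estimate (a sketch, not the authors' code).
    # Idea: the Moreau envelope u(x, t) = min_y f(y) + ||x - y||^2 / (2 t)
    # satisfies grad u(x, t) = (x - prox_{t f}(x)) / t and solves an HJ equation;
    # its viscous (heat-equation) regularization has the Cole-Hopf form
    #   u_delta(x, t) = -delta * log E_{y ~ N(x, delta*t*I)}[ exp(-f(y) / delta) ],
    # and x - t * grad u_delta(x, t) works out to a softmax-weighted average
    # of the Gaussian samples.  f only needs to be a (black-box) callable.
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x, dtype=float)
    y = x + np.sqrt(delta * t) * rng.standard_normal((n_samples, x.size))
    fy = np.array([f(yi) for yi in y])
    logw = -fy / delta
    logw -= logw.max()          # numerically stable softmax weights
    w = np.exp(logw)
    w /= w.sum()
    return w @ y                # approximates prox_{t f}(x) as delta -> 0

if __name__ == "__main__":
    # Sanity check on f(y) = 0.5 * ||y||^2, whose exact prox is x / (1 + t).
    # For this quadratic f the weighted average is exact for any delta,
    # so a larger delta keeps the Monte Carlo average stable.
    f = lambda y: 0.5 * float(y @ y)
    x = np.array([1.0, -0.5, 2.0])
    print("estimate:", hj_prox_sketch(f, x, t=1.0, delta=1.0))
    print("exact:   ", x / 2.0)

In general, shrinking delta tightens the approximation of the true proximal but concentrates the softmax weights on fewer samples, so more samples are needed for a stable estimate.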

Seminar: Math Machine Learning seminar MPI MiS + UCLA
Dates: 05.12.24, 19.12.24
Host: MPI for Mathematics in the Sciences (Live Stream)

Contact: Katharina Matschke, MPI for Mathematics in the Sciences
