Abstract for the talk on March 2, 2023 (5:00 p.m.)

Math Machine Learning seminar MPI MIS + UCLA

Samy Wu Fung (Colorado School of Mines)
Using Hamilton Jacobi PDEs in Optimization
See also the video of this talk.
See also the slides of this talk.

First-order optimization algorithms are widely used today. Two standard building blocks in these algorithms are proximal operators (proximals) and gradients. Although gradients can be computed for a wide array of functions, explicit proximal formulas are known only for limited classes of functions. We provide an algorithm, HJ-Prox, for accurately approximating such proximals. It is derived from a collection of relations between proximals, Moreau envelopes, Hamilton-Jacobi (HJ) equations, heat equations, and importance sampling. In particular, HJ-Prox smoothly approximates the Moreau envelope and its gradient, and the degree of smoothing can be adjusted so that the operator acts as a denoiser. Our approach applies even when functions are accessible only through (possibly noisy) black-box samples. It can also be embedded into a zeroth-order algorithm with guaranteed convergence to global minima, assuming continuity of the objective function; this is done by leveraging connections between the gradient of the Moreau envelope and the proximal operator. We demonstrate the numerical effectiveness of HJ-Prox on several examples.
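The core relation behind this kind of approximation can be sketched concretely. Via the Moreau envelope, the Cole-Hopf transform, and the heat equation, the proximal admits the representation prox_{tf}(x) ≈ E[y exp(-f(y)/δ)] / E[exp(-f(y)/δ)] with y ~ N(x, δt I), i.e. a softmax-weighted average of Gaussian samples around x, which needs only (possibly noisy) evaluations of f. The following is a minimal Monte Carlo sketch of that sampling formula, not the authors' implementation; the function name, the choice of δ, and the sample count are illustrative assumptions.

```python
import numpy as np

def hj_prox(f, x, t=1.0, delta=0.5, n_samples=200_000, seed=0):
    """Monte Carlo sketch of a sampling-based proximal approximation.

    Approximates prox_{t f}(x) by the softmax-weighted average
        E[y exp(-f(y)/delta)] / E[exp(-f(y)/delta)],  y ~ N(x, delta*t*I),
    using only black-box evaluations of f. The parameter delta controls
    the smoothing; as delta -> 0 the estimate concentrates at the true prox.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Gaussian samples centered at x with variance delta * t
    y = x + np.sqrt(delta * t) * rng.standard_normal((n_samples, x.size))
    # log importance weights -f(y)/delta; shift by the max for stability
    log_w = -np.array([f(yi) for yi in y]) / delta
    log_w -= log_w.max()
    w = np.exp(log_w)
    w /= w.sum()                  # normalized (softmax) weights
    return w @ y                  # weighted average of the samples
```

As a sanity check, for the quadratic f(y) = ||y||^2/2 the exact proximal is prox_{tf}(x) = x/(1+t), and the Gaussian-weighted average above is unbiased for any δ, so at x = 2 and t = 1 the estimate should be close to 1.0.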

