Talk

Optimization, Robustness and Privacy. A Story through the Lens of Concentration

  • Simone Bombari (IST Austria)
Live Stream

Abstract

High-dimensional probability is a powerful tool for understanding phenomena that typically occur in deep learning models. In this talk, we discuss different concentration bounds with consequences for (i) the gradient descent optimization of a deep neural network, (ii) the adversarial robustness of its solution, and (iii) its privacy guarantees. The first main result implies that minimally overparameterized deep neural networks can be successfully optimized (to 0 training loss) with gradient descent, and it leverages tight lower bounds on the smallest eigenvalue of the neural tangent kernel (NTK). The second builds on the recently proposed universal law of robustness, provides sharper bounds for the random features and NTK models, and consequently addresses the conjecture posed by Bubeck, Li, and Nagaraj. Finally, we draw a connection between the generalization performance of a model and its privacy guarantees, measured in terms of its safety against a family of black-box information recovery attacks.
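As an illustration of the first point, the sketch below (not from the talk) computes the empirical NTK Gram matrix of a toy two-layer ReLU network and its smallest eigenvalue, whose positivity underpins gradient descent convergence to zero training loss; the network width, data dimension, and sample size are hypothetical toy values.

```python
# Illustrative sketch: empirical NTK Gram matrix and its smallest eigenvalue
# for a small two-layer ReLU network. All sizes below are hypothetical.
import jax
import jax.numpy as jnp

def init_params(key, d=10, width=256):
    k1, k2 = jax.random.split(key)
    W1 = jax.random.normal(k1, (width, d)) / jnp.sqrt(d)
    w2 = jax.random.normal(k2, (width,)) / jnp.sqrt(width)
    return (W1, w2)

def f(params, x):
    # Scalar-output two-layer ReLU network.
    W1, w2 = params
    return jnp.dot(w2, jax.nn.relu(W1 @ x))

params = init_params(jax.random.PRNGKey(0))
X = jax.random.normal(jax.random.PRNGKey(1), (50, 10))  # n = 50 samples

def flat_grad(x):
    # Gradient of the output w.r.t. all parameters, flattened into one vector.
    grads = jax.grad(f)(params, x)
    return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])

J = jax.vmap(flat_grad)(X)           # Jacobian, shape (n, num_params)
K = J @ J.T                          # empirical NTK Gram matrix, shape (n, n)
lam_min = jnp.linalg.eigvalsh(K)[0]  # smallest eigenvalue of the NTK
print(f"lambda_min(K) = {lam_min:.4f}")
```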

Seminar: 5/2/24, 5/16/24

Math Machine Learning seminar MPI MiS + UCLA

MPI for Mathematics in the Sciences, Live Stream

Katharina Matschke

MPI for Mathematics in the Sciences, contact via email

Upcoming Events of This Seminar