Selective Forgetting and Mixed Privacy in Deep Networks

  • Aditya Golatkar (UCLA)

Abstract

We introduce and explore the problem of selectively forgetting a particular subset of the data used to train a deep neural network. We propose an initial method for "scrubbing" the weights clean, by minimizing a Forgetting Lagrangian, so that any probing function of the weights is indistinguishable from the same function applied to the weights of a network trained without the data to be forgotten. We then improve on this method with a deterministic update of a linearized version of the model, inspired by the Neural Tangent Kernel (NTK), which enables us to bound the information extractable in a black-box setting. This motivates Linear Quadratic Fine-tuning (LQF), the first method for linearizing a pre-trained model that achieves performance comparable to non-linear fine-tuning. LQF allows us to exploit the strength of deep neural networks while enjoying the theoretical properties of convex optimization. Using LQF, we introduce a novel notion of forgetting in the mixed-privacy setting, which enables forgetting for large-scale vision tasks while providing theoretical guarantees. Finally, we extend the mixed-privacy setting to differential privacy (DP) and introduce AdaMix, an adaptive DP algorithm that exploits few-shot public data to improve the privacy/accuracy trade-off on practical vision tasks.
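The key device behind LQF is replacing the network with its first-order Taylor expansion in the weights around the pre-trained point, so that fine-tuning becomes a convex problem. Below is a minimal sketch of such a linearization in JAX; this is not the authors' code, and the toy model, names, and shapes are purely illustrative.

```python
import jax
import jax.numpy as jnp

def linearize(f, w0):
    # f_lin(w, x) = f(w0, x) + J_w f(w0, x) (w - w0):
    # the first-order Taylor expansion of f in the weights around w0.
    def f_lin(w, x):
        dw = jax.tree_util.tree_map(lambda a, b: a - b, w, w0)
        y0, tangent = jax.jvp(lambda v: f(v, x), (w0,), (dw,))
        return y0 + tangent
    return f_lin

# Hypothetical toy "network": a single dense layer with tanh (illustrative).
def f(w, x):
    return jnp.tanh(x @ w["W"] + w["b"])

w0 = {"W": jnp.ones((3, 2)), "b": jnp.zeros(2)}  # stand-in for pre-trained weights
f_lin = linearize(f, w0)
x = jnp.ones((4, 3))
assert jnp.allclose(f_lin(w0, x), f(w0, x))  # the two agree at the expansion point
```

At the expansion point the linearized model coincides with the original network, and because f_lin is affine in the weights, a squared-error objective over it is quadratic; that is the convexity the abstract appeals to.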
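For the mixed-privacy and DP parts, one generic way to exploit few-shot public data, in the spirit of (but not identical to) AdaMix, is to apply DP-SGD-style per-example clipping and Gaussian noising only to gradients from the private set, while the public set contributes exact gradients. A minimal sketch under those assumptions; all names, constants, and the fixed mixing weight are illustrative.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Per-example squared error for a toy linear model (illustrative).
    return (jnp.dot(x, w) - y) ** 2

def private_grad(w, xs, ys, clip, sigma, key):
    # Standard DP-SGD recipe on the private set: per-example gradients via
    # vmap, clipped to l2 norm `clip`, summed, plus Gaussian noise.
    g = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))(w, xs, ys)
    norms = jnp.linalg.norm(g, axis=1, keepdims=True)
    g = g * jnp.minimum(1.0, clip / (norms + 1e-12))
    noise = sigma * clip * jax.random.normal(key, w.shape)
    return (g.sum(axis=0) + noise) / xs.shape[0]

def public_grad(w, xs, ys):
    # Public examples contribute exact (unclipped, noise-free) gradients.
    return jax.grad(lambda v: jnp.mean(jax.vmap(loss, (None, 0, 0))(v, xs, ys)))(w)

key = jax.random.PRNGKey(0)
w = jnp.zeros(3)
x_priv, y_priv = jnp.ones((8, 3)), jnp.ones(8)  # toy "private" data
x_pub, y_pub = jnp.ones((2, 3)), jnp.ones(2)    # toy few-shot "public" data
for _ in range(10):
    key, sub = jax.random.split(key)
    g = (0.5 * private_grad(w, x_priv, y_priv, clip=1.0, sigma=1.0, key=sub)
         + 0.5 * public_grad(w, x_pub, y_pub))  # fixed mixing weight, illustrative
    w = w - 0.1 * g
```

The clipping norm and noise multiplier govern the privacy/accuracy trade-off; the noise-free public gradient supplies the extra signal that improves it.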


Math Machine Learning seminar MPI MIS + UCLA

MPI for Mathematics in the Sciences (Live Stream)

Contact: Katharina Matschke (MPI for Mathematics in the Sciences)
