

Abstract for the talk on 30.07.2020 (17:00)
Math Machine Learning seminar MPI MIS + UCLA
Robert Peharz (TU Eindhoven)
Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters
See also the video of this talk.
Reducing the memory footprint of machine learning models is an important goal, in order to make them amenable to embedded systems, mobile applications, and edge computing, as well as to reduce their energy consumption. The classical approach is a pruning–quantization–coding pipeline, where pruning and quantization can be seen as heuristics to reduce the entropy of a deterministic weight vector, and Shannon-style schemes are used for coding. In this talk, I present our recent work on a novel coding scheme – Minimal Random Code Learning (MIRACLE) – based on a variational approach and the classical bits-back argument. Rather than interpreting the model weights as a deterministic sequence, we devise an algorithm which draws a sample from the trained variational distribution, whose coding length directly corresponds to the Kullback-Leibler term in the variational objective. This allows us to explicitly control the compression rate while optimizing the expected loss on the training set. Our method sets a new state of the art in neural network compression, as it strictly dominates previous approaches in the Pareto sense: on the benchmarks LeNet-5/MNIST and VGG-16/CIFAR-10, our approach yields the best test performance for a fixed memory budget and, vice versa, achieves the highest compression rate for a fixed test performance.
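To make the coding step concrete, here is a minimal sketch of the underlying idea, not the authors' implementation: assuming a fully factorized Gaussian prior p and variational posterior q for a block of weights, the sender draws roughly 2^KL(q||p) candidate weight vectors from p using a random seed shared with the receiver, selects one candidate with probability proportional to the importance weight q/p, and transmits only its index, which costs about KL(q||p) bits. All function and variable names below are hypothetical.

```python
import numpy as np

def miracle_sample_index(mu_q, sigma_q, mu_p, sigma_p, kl_bits, seed=0):
    """Sketch of the MIRACLE selection step for one weight block.

    Assumes a factorized Gaussian prior p = N(mu_p, sigma_p) and
    posterior q = N(mu_q, sigma_q); draws ~2**kl_bits candidates from p
    and picks one with probability proportional to the importance
    weight q/p. Returns the chosen index (the ~kl_bits-bit codeword)
    and the corresponding weight sample.
    """
    rng = np.random.default_rng(seed)        # seed shared by sender and receiver
    n_candidates = int(2 ** kl_bits)         # pool size ~ 2^{KL(q||p)}
    w = rng.normal(mu_p, sigma_p, size=(n_candidates, mu_p.size))
    # Log importance weights log q(w) - log p(w); shared constants cancel.
    log_q = -0.5 * (((w - mu_q) / sigma_q) ** 2).sum(1) - np.log(sigma_q).sum()
    log_p = -0.5 * (((w - mu_p) / sigma_p) ** 2).sum(1) - np.log(sigma_p).sum()
    ratio = log_q - log_p
    probs = np.exp(ratio - ratio.max())      # stabilized softmax over candidates
    probs /= probs.sum()
    idx = rng.choice(n_candidates, p=probs)  # this index encodes the weights
    return idx, w[idx]

# Toy usage: a 4-dimensional weight block with a 10-bit KL budget.
mu_q, sigma_q = np.array([0.5, -0.2, 0.1, 0.0]), np.full(4, 0.3)
mu_p, sigma_p = np.zeros(4), np.ones(4)
idx, w_hat = miracle_sample_index(mu_q, sigma_q, mu_p, sigma_p, kl_bits=10)
```

The shared seed is what makes the scheme decodable: the receiver regenerates the identical candidate pool from the prior and recovers the weight vector from the transmitted index alone, so the KL term in the variational objective directly becomes the code length.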