Learning disentangled representations
- Francesco Locatello (Intern at Google Brain Amsterdam, PhD Student at ETH Zurich and Max Planck Institute for Intelligent Systems)
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation (e.g. the content and position of objects in an image) which can be recovered by unsupervised learning algorithms. A recent line of work has argued that learning such representations offers several benefits, ranging from reduced sample complexity on downstream tasks to improved interpretability.
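To make the generative assumption concrete, here is an illustrative toy sketch (not from the talk): observations are rendered from a handful of independently sampled ground-truth factors, and a disentangled representation would recover exactly those factors from the pixels alone. The factor names and rendering function are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(x, y, intensity, size=16):
    """Render a 2x2 square at position (x, y) with the given intensity.

    This stands in for the (unknown) generative process mapping a few
    explanatory factors of variation to a high-dimensional observation.
    """
    img = np.zeros((size, size))
    img[y:y + 2, x:x + 2] = intensity
    return img

# Sample the ground-truth factors independently, then render an observation.
factors = dict(x=int(rng.integers(0, 14)),
               y=int(rng.integers(0, 14)),
               intensity=float(rng.uniform(0.5, 1.0)))
observation = render(**factors)
print(factors, observation.shape)
```

An unsupervised learner only ever sees `observation`; the question the talk addresses is when, and under which inductive biases, the underlying `factors` can be identified from such data alone.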
In this talk, I will discuss recent progress in the field and challenge some common assumptions. I will address the role of inductive biases in light of the theoretical impossibility of fully unsupervised learning of disentangled representations, and provide a sober look at the performance of state-of-the-art approaches. I will further present how to go beyond the purely unsupervised setting in both theory and practice, and discuss concrete applications to fairness and abstract visual reasoning.