Grounded Representation Learning Through Equivariant Deep Learning
AI systems often need to process data that originates in the physical world and is thus inherently grounded in geometry and physics. Neural architectures that preserve this grounding in their representations therefore excel at tasks with a strong geometric component. This is particularly evident in domains such as computational chemistry, physics, and medical image analysis.
In this talk, I will argue that equivariant neural networks are precisely the class of architectures that excels at preserving this notion of geometric and physical grounding. I will present the idea that these networks can be viewed as creating neural ideograms: geometric symbols that can represent both literal geometric objects (pictograms) and more abstract concepts (ideograms).
Furthermore, I will showcase specific examples of equivariant methods that employ geometric objects (such as multi-vectors and spherical harmonics/irreps) as the internal hidden states of neurons, in contrast to the conventional use of standard Euclidean feature vectors. I will argue that representing geometric information in such internal states allows neural networks to excel at tasks demanding a form of geometric reasoning. To illustrate this, I will show an application of these techniques to the generative modelling of molecules.
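To give a flavour of what a geometric hidden state means in practice, the sketch below keeps each neuron's state as a set of 3D vector channels and updates it with a linear channel mix gated by an invariant scalar (the vector norm), so the layer commutes with rotations. This is a minimal, hypothetical "vector-valued neuron" illustration, not any specific architecture from the talk; all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gated_vector_update(v, W, b):
    """Hidden states are geometric vectors v of shape (n, c, 3)
    rather than plain scalar features. Channels are mixed linearly
    (equivariant) and gated by an invariant function of their norms,
    so f(v @ Q) == f(v) @ Q for any rotation Q."""
    mixed = np.einsum('dc,ncx->ndx', W, v)        # linear channel mixing
    norms = np.linalg.norm(mixed, axis=-1, keepdims=True)  # rotation-invariant
    gate = np.tanh(norms + b[None, :, None])      # invariant scalar gate
    return gate * mixed                           # scaling preserves direction
```

Because the only nonlinearity acts on rotation-invariant norms, rotating the input vectors rotates the output vectors identically, which is exactly the equivariance property discussed above.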
As a technical contribution, in addition to the more conceptual ideas above, I will present a simple recipe for building equivariant architectures based on a generalized definition of weight sharing as conditional message passing, with interaction functions conditioned on pairwise invariant attributes that represent equivalence classes of point pairs.
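As a rough illustration of this recipe (a minimal sketch under stated assumptions, not the talk's actual implementation), the layer below conditions its message function on the pairwise distance, which for rigid motions of point clouds is an invariant attribute of each point pair; the MLP is thus effectively shared across all pairs in the same equivalence class. Function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_message_layer(pos, feat, W1, b1, W2, b2):
    """One message-passing step on a point cloud.
    pos:  (n, 3) point positions, feat: (n, c) invariant features.
    Messages m_ij = MLP([f_j, d_ij]) are conditioned on the pairwise
    invariant attribute d_ij = |x_i - x_j|, so the output features are
    invariant under rotations and translations of pos."""
    n, c = feat.shape
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (n, n)
    inp = np.concatenate(
        [np.broadcast_to(feat[None, :, :], (n, n, c)),  # sender features f_j
         d[..., None]],                                 # invariant attribute d_ij
        axis=-1)                                        # (n, n, c + 1)
    h = np.tanh(inp @ W1 + b1)                          # shared message MLP
    m = h @ W2 + b2                                     # (n, n, c)
    return feat + m.sum(axis=1)                         # aggregate over senders j
```

Since the architecture touches the positions only through pairwise distances, applying any rotation and translation to `pos` leaves the output unchanged, which is the invariance (and, with directional attributes, equivariance) the recipe is designed to guarantee.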