Workshop

Two Aspects of Graph Neural Networks and Hyperbolic Geometry

E1 05 (Leibniz-Saal)

Abstract

Graph neural networks operate on two distinct data modes: the feature vectors and the graph structure. In this talk, we explore the interaction between hyperbolic geometry and each of these two modes.

For the feature vectors, we explore the effect of geometry on the performance of three types of GNNs for node classification and link prediction. To do so, we propose a family of mixed-geometry GNNs with hyperbolic and Euclidean components that can be trained jointly. We compare the performance of our mixed-geometry models against their Euclidean and hyperbolic counterparts across various datasets. We find that the mixed-geometry models perform best for node classification, while the hyperbolic models perform best for link prediction. Further, for node classification, the choice of architecture has a more significant impact on performance than the choice of geometry, whereas for link prediction, the choice of geometry has a much more significant impact than the choice of architecture.
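To illustrate what a jointly trained mixed-geometry model can look like, the sketch below pairs a standard Euclidean GCN-style branch with a Poincaré-ball branch in the style of HGCN. The layer structure, names, and fusion choice are assumptions made for illustration; the abstract does not specify the talk's actual architecture.

```python
# Minimal sketch of a mixed-geometry GNN layer (hypothetical; not the
# talk's exact model). A Euclidean branch and a Poincare-ball branch are
# held side by side and trained jointly.
import torch
import torch.nn as nn

EPS = 1e-6

def expmap0(v):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # exp_0(v) = tanh(||v||) * v / ||v||.
    n = v.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.tanh(n) * v / n

def logmap0(x):
    # Logarithmic map at the origin: log_0(x) = artanh(||x||) * x / ||x||.
    n = x.norm(dim=-1, keepdim=True).clamp_min(EPS).clamp_max(1 - EPS)
    return torch.atanh(n) * x / n

class MixedGeometryLayer(nn.Module):
    """One layer carrying a Euclidean state and a Poincare-ball state."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.euc = nn.Linear(in_dim, out_dim)  # Euclidean component
        self.hyp = nn.Linear(in_dim, out_dim)  # hyperbolic component

    def forward(self, x_euc, x_hyp, adj):
        # adj: row-normalized adjacency for neighborhood aggregation.
        h_euc = torch.relu(adj @ self.euc(x_euc))
        # Hyperbolic branch: linear map, aggregation, and nonlinearity act
        # in the tangent space at the origin (HGCN-style); exp/log maps
        # move points between the ball and the tangent space.
        h_hyp = expmap0(torch.relu(adj @ self.hyp(logmap0(x_hyp))))
        return h_euc, h_hyp

# Usage: features start Euclidean; lift a copy onto the ball once, e.g.
# h_euc, h_hyp = layer(x, expmap0(x), adj).
```

Since both branches are ordinary `nn.Module` parameters, a single optimizer updates the Euclidean and hyperbolic components jointly, which is one simple reading of "trained jointly" above.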

For the graph structure, we study the problem of oversquashing. We use ideas from geometric group theory to present RelWire, a rewiring technique based on the geometry of the graph. We derive topological connections for RelWire. We then rewire several real-world molecule datasets and show that RelWire is Pareto optimal: it achieves the best balance between improving the eigengap and commute times and minimizing changes to the topology of the underlying graph.
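The Pareto criterion weighs spectral improvement against topological change. The sketch below shows the three standard quantities involved, assuming NetworkX graphs; RelWire itself is not reproduced here, and the function names are illustrative.

```python
# Hedged sketch of rewiring-evaluation metrics: eigengap, commute times,
# and a simple proxy for topological change between original and rewired
# graphs. These are standard quantities, not the talk's exact pipeline.
import networkx as nx
import numpy as np

def eigengap(G):
    # Spectral gap lambda_2 of the normalized Laplacian; larger values
    # indicate better connectivity and, heuristically, less oversquashing.
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.linalg.eigvalsh(L)[1]

def mean_commute_time(G):
    # Commute time C(u, v) = vol(G) * (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]),
    # where Lp is the pseudoinverse of the combinatorial Laplacian and
    # vol(G) = 2 * |E| for an unweighted graph.
    nodes = list(G.nodes())
    Lp = np.linalg.pinv(nx.laplacian_matrix(G, nodelist=nodes).toarray())
    vol = 2 * G.number_of_edges()
    d = np.diag(Lp)
    C = vol * (d[:, None] + d[None, :] - 2 * Lp)
    return C[np.triu_indices(len(nodes), k=1)].mean()

def topology_change(G, H):
    # Edge edit distance between the original graph G and rewired graph H:
    # the number of edges added plus the number removed.
    e_g = set(map(frozenset, G.edges()))
    e_h = set(map(frozenset, H.edges()))
    return len(e_g ^ e_h)
```

In these terms, a Pareto-optimal rewiring increases `eigengap`, decreases `mean_commute_time`, and keeps `topology_change` small relative to competing rewirings.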

Katharina Matschke

Max Planck Institute for Mathematics in the Sciences

Samantha Fairchild

Max Planck Institute for Mathematics in the Sciences

Diaaeldin Taha

Max Planck Institute for Mathematics in the Sciences

Anna Wienhard

Max Planck Institute for Mathematics in the Sciences