Kirchhoff's celebrated matrix tree theorem expresses the number of spanning trees of a graph as a maximal minor of the graph's Laplacian matrix. In modern language, this determinantal counting formula reflects the fact that the spanning trees of a graph form the bases of a regular matroid. In this talk, I will discuss some consequences of this perspective for the study of a related quantity from electrical circuit theory: the effective resistance. I will give a new characterization of effective resistances in terms of a certain polytope and discuss applications to recent work on discrete notions of curvature based on the effective resistance.
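As a concrete illustration (a minimal numpy sketch, not taken from the talk), one can count the spanning trees of the complete graph K4 by deleting a row and column of its Laplacian, and recover effective resistances from the Laplacian pseudoinverse:

```python
import numpy as np

# Adjacency and Laplacian of the complete graph K4.
A = np.ones((4, 4)) - np.eye(4)
L = np.diag(A.sum(axis=1)) - A

# Matrix tree theorem: any maximal minor of L counts spanning trees.
# Deleting the first row and column gives det = 16 = 4^(4-2) (Cayley).
n_trees = round(np.linalg.det(L[1:, 1:]))

# Effective resistance via the Moore-Penrose pseudoinverse of L:
# R_eff(i, j) = L+[i, i] + L+[j, j] - 2 * L+[i, j].
Lp = np.linalg.pinv(L)
r01 = Lp[0, 0] + Lp[1, 1] - 2 * Lp[0, 1]   # 0.5 for any pair in K4
print(n_trees, r01)
```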
Existing error-bounded lossy compression techniques control the pointwise error during compression to guarantee the integrity of the decompressed data. However, they typically do not explicitly preserve the topological features in the data. When decompressed data are analyzed post hoc with topological methods, it is therefore desirable to preserve topology during compression so that the resulting scientific insights are topologically consistent and correct. In this talk, we will discuss a couple of lossy compression methods that preserve the topological features of 2D and 3D scalar fields. Specifically, we aim to preserve the types and locations of local extrema as well as the level set relations among critical points captured by contour trees in the decompressed data. This talk is based on joint work with Lin Yan, Xin Liang, Hanqi Guo, and Nathan Gorski.
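For orientation (a generic sketch, not one of the methods discussed in the talk), the simplest pointwise error bound is enforced by uniform scalar quantization with bin width 2*eps; topology-preserving compressors additionally constrain the quantization so that, for example, the critical points of the decompressed field match those of the input:

```python
import numpy as np

def compress(data, eps):
    # Rounding to bins of width 2*eps guarantees |x - decompress(q)| <= eps.
    return np.round(data / (2 * eps)).astype(np.int64)

def decompress(q, eps):
    return q * (2 * eps)

rng = np.random.default_rng(0)
field = rng.random((64, 64))      # toy 2D scalar field
eps = 1e-2                        # absolute pointwise error bound
recon = decompress(compress(field, eps), eps)
assert np.max(np.abs(field - recon)) <= eps
```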
Dimensionality reduction is a crucial technique in data analysis and machine learning, enabling the simplification of complex high-dimensional datasets while preserving their intrinsic structures.
In this talk we will present the mathematical foundations of several prominent dimensionality reduction methods: Principal Component Analysis (PCA), Isomap, Laplacian Eigenmaps, …
We will explore the specific optimization objectives and the role of weight assignments within k-neighborhood graphs for each method. By examining the theoretical frameworks and optimization processes, we aim to provide a comprehensive understanding of how these techniques transform metric relationships within data into meaningful lower dimensional representations. Insights into the mathematical principles that drive these algorithms highlight their unique approaches to capturing and preserving data structures.
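As a small illustration of the first of the listed methods (a minimal sketch assuming numpy, not code from the talk), PCA reduces to an eigendecomposition of the sample covariance matrix, with the top eigenvectors giving the projection that retains the most variance:

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the top-k eigenvectors of the
    # sample covariance matrix (the principal components).
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :k]    # top-k principal directions
    return Xc @ components

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))   # toy high-dimensional data
Y = pca(X, 2)                    # 2-dimensional embedding
```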
Chebyshev varieties are algebraic varieties parametrized by Chebyshev polynomials. They arise naturally when solving polynomial equations expressed in the Chebyshev basis. More precisely, when passing from monomials to Chebyshev polynomials, Chebyshev varieties replace toric varieties. I will introduce these objects, discuss their defining equations and present key properties. Via examples, I will motivate their use in practical computations. This is joint work with Zaïneb Bel-Afia and Chiara Meroni.
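To make the change of basis concrete (a minimal sketch using numpy's Chebyshev module; the example polynomial is illustrative, not from the talk), one can pass between the Chebyshev and monomial bases and solve for roots in either:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# p(x) = 2*T_0(x) - T_1(x) + 3*T_2(x), written in the Chebyshev basis.
cheb_coeffs = [2, -1, 3]

# Convert to the monomial basis using T_0 = 1, T_1 = x, T_2 = 2x^2 - 1:
# p(x) = 6x^2 - x - 1.
mono_coeffs = C.cheb2poly(cheb_coeffs)      # array([-1., -1., 6.])

# Roots can be computed directly from the Chebyshev coefficients.
roots = C.chebroots(cheb_coeffs)            # [-1/3, 1/2]
assert np.allclose(np.polyval(mono_coeffs[::-1], roots), 0)
```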
I will start by overviewing kernel methods in machine learning, and how the simple kernel trick allows one to effortlessly turn intuitive linear methods into non-linear ones. While these methods can seem mysterious, I’ll try to give insight into the geometry that arises, especially in kernel SVM. This will lead into kernel range spaces, which describe all the ways one can inspect a data set with a kernel. From there I will discuss approximating these with coresets, as well as approximating the spaces themselves, which leads to surprising results in high dimensions.
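For intuition (a minimal sketch assuming numpy; kernel ridge regression stands in for the SVM here, since it shows the same trick in fewer lines), a linear method that only touches the data through inner products becomes non-linear once those inner products are replaced by kernel evaluations:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel Gram matrix: k(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-3, gamma=1.0):
    # Kernel ridge regression: the linear solution needs only inner
    # products, so swapping in kernel values makes the model non-linear.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

X = np.linspace(-3, 3, 40)[:, None]
alpha = fit(X, np.sin(X).ravel())
pred = predict(X, alpha, X)      # fits the non-linear target sin(x)
```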
Graph-structured data appears in many different contexts, and with this comes the need for tools to analyse it. One of the most common tools for studying data sets of graphs is the graph neural network (GNN). However, to many of us GNNs remain a black box that magically produces predictions about graphs. In this lecture we will learn about the basics of GNNs, possible generalizations, and research directions.
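As a taste of the basics (a minimal numpy sketch of one message-passing layer in the style of a graph convolutional network; the toy graph and dimensions are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    # One message-passing layer: aggregate neighbour features through the
    # symmetrically normalized adjacency (with self-loops), then apply a
    # linear map followed by a ReLU non-linearity.
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy input: a path graph on 4 nodes with 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 3))    # node features
W = rng.normal(size=(3, 8))    # weights (learned during training)
H1 = gcn_layer(A, H, W)        # updated node embeddings, shape (4, 8)
```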
Attention: this talk has been postponed to a later date.
Modern-day neural networks are amazing prediction machines, but to get at explanations one has to understand higher-order relations between data as they fiber over their predictions. In this talk I will connect the urgent questions of modern data science with the distinguished history of applied topology by considering simple geometric examples and probing them with increasingly complicated tools. Ideas from dynamics, stratification theory and sheaf theory will be introduced in a loose and intuitive fashion to trace future directions for research.