Forman's discrete Morse theory is a combinatorial adaptation of classical Morse theory on manifolds to cell complexes. We introduce a discrete analogue of the stratified Morse theory of Goresky and MacPherson. We describe the basics of this theory and prove fundamental theorems relating the topology of a general simplicial complex with the critical simplices of a discrete stratified Morse function on the complex. We also provide an algorithm that constructs a discrete stratified Morse function out of an arbitrary function defined on a finite simplicial complex; this is different from simply constructing a discrete Morse function on such a complex. We then give simple examples to convey the utility of our theory. Furthermore, we relate our theory with the classical stratified Morse theory in terms of triangulated Whitney stratified spaces. This is joint work with Kevin Knudson. If time permits, we will discuss some recent efforts in expanding the above theory.

I consider the problem of detecting the group of vertices most similar to a given one in a graph/network. Starting from measures defined in the literature, I analyze their main drawbacks. Then, I propose a metric based on the communicability cosine angle between pairs of vertices in a network. I define a similarity measure based on it and illustrate that it solves the problems existing with previous measures. I then define a measure based on this similarity, which quantifies the communicability closeness centrality (CCC) of a vertex to the rest in a network. Using it, I approach the problem of distinguishing all pairs of vertices which are not automorphically equivalent in a network. Finally, I illustrate the use of the CCC in ranking vertices in real-world networks, and illustrate its main differences with existing centrality measures such as the degree, closeness, betweenness and eigenvector centralities.
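The communicability cosine angle admits a short numerical sketch. This assumes the standard communicability matrix G = exp(A) and the angle formula cos(theta_pq) = G_pq / sqrt(G_pp G_qq); the exact similarity and CCC measures of the talk may differ:

```python
import numpy as np
from scipy.linalg import expm

def communicability_cosine(A):
    """Cosine of the communicability angle between all vertex pairs.

    Assumes the standard communicability matrix G = exp(A); the
    cosine between vertices p and q is G_pq / sqrt(G_pp * G_qq).
    """
    G = expm(np.asarray(A, dtype=float))
    d = np.sqrt(np.diag(G))
    return G / np.outer(d, d)

# Path graph 0-1-2: vertex 1 sits "between" vertices 0 and 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
C = communicability_cosine(A)
```

The diagonal is exactly 1 (a vertex is maximally similar to itself), and off-diagonal entries lie strictly between 0 and 1 for connected, non-equivalent pairs.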

It is known that the symplectic property is preserved by the mean curvature flow in a Kähler-Einstein surface, in which case the flow is called the “symplectic mean curvature flow”. It was proved that there are no finite-time Type I singularities for the symplectic mean curvature flow. We will talk about recent progress on an important Type II singularity of the symplectic mean curvature flow: the symplectic translating soliton. We will show that a symplectic translating soliton must be a plane under some natural assumptions, which are shown to be necessary by investigating some examples.

What is a time-varying graph, or a time-varying topological space and more generally what does it mean for a mathematical structure to vary over time? Here we sow the seeds of a general theory of temporal data by introducing categories of narratives. These are sheaves on posets of intervals of time which specify snap-shots of a temporal object as well as relationships between them. This theory satisfies five desiderata distilled from the burgeoning field of time-varying graphs: (D1) any theory of temporal data should define not only time-varying objects, but also appropriate morphisms thereof; (D2) in contrast to being a mere sequence, temporal data should explicitly record whether it is to be viewed cumulatively or persistently. Furthermore there should be methods of conversion between these two viewpoints; (D3) any theory of temporal data should come equipped with systematic ways of lifting static notions to their appropriate temporal analogues; (D4) theories of temporal data should be object agnostic and applicable to any mathematical structure; (D5) any theory of temporal data should be seamlessly interoperable with theories of dynamical systems. In summary, our theory of narratives provides a consistent and general framework for studying mathematical structures which change over time. This is a first step towards a unified theory of time-varying data.
Joint work with: Benjamin Merlin Bumpus, James Fairbanks, Martti Karvonen, and Frédéric Simard

We introduce quantum information processing and the related physics and mathematics, including quantum coherence, quantum correlations (quantum discord, quantum entanglement, quantum steering, quantum non-locality), wave-particle duality, quantum uncertainty relations, as well as quantum algorithms for noisy intermediate-scale quantum processors.

The modular operad encodes in a combinatorial way how nodal curves in the boundary of the moduli space of stable curves can be obtained by gluing smooth curves along marked points. In this talk, I will present a generalization of the modular operad to moduli spaces of SUSY curves (or super Riemann surfaces). This requires colored graphs and generalized operads in the sense of Borisov-Manin.

A tubular neighborhood of an outer-isometric image of a metric space X in an injective metric space is homotopy equivalent to the Vietoris-Rips complex on X. I will discuss how the homotopy type of such neighborhoods (or Vietoris-Rips complexes) of spheres depends on the radius of the neighborhood and what the threshold radii are.
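As a toy illustration of how the topology of a Vietoris-Rips complex depends on the scale, the following sketch counts connected components of the complex (determined by its 1-skeleton) on points sampled from a circle; the point set and thresholds are illustrative choices, not the talk's:

```python
import numpy as np
from itertools import combinations

def vr_components(points, r):
    """Number of connected components of the Vietoris-Rips complex
    VR(points; r): vertices are the points, and an edge joins each
    pair at distance <= r. Connectivity depends only on the
    1-skeleton, so union-find on edges suffices."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if np.linalg.norm(points[i] - points[j]) <= r:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    return len({find(i) for i in range(n)})

# Six evenly spaced points on the unit circle; adjacent chords have length 1.
theta = 2 * np.pi * np.arange(6) / 6
X = np.column_stack([np.cos(theta), np.sin(theta)])
```

Below the adjacent-chord length the complex is six isolated vertices; just above it, the complex becomes connected (a hexagonal cycle).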

Let $\Omega$ be a compact subset of a complete Riemannian manifold $(N,h)$. Suppose there exists a $C^2$-function $f: \Omega \longrightarrow \mathbb{R}$ which is strictly convex in the geodesic sense. It is then a classical consequence of the maximum principle that every harmonic map $u: (M,g) \longrightarrow (N,h)$, where $(M,g)$ is complete and compact without boundary, with $u(M) \subset \Omega$ must be a constant map. Is the converse of this maximum principle true? That is, suppose $\Omega$ is a subset of $(N,h)$ with the following property: every harmonic map $u: (M,g) \longrightarrow (N,h)$, where $(M,g)$ is complete and compact without boundary, with $u(M) \subset \Omega$ must be a constant map. The natural question is then: is the existence of a strictly convex function defined on $\Omega$ what is preventing non-constant harmonic maps from existing inside this subset? In this talk I will present a counterexample to this converse maximum principle and comment on a result by M. Gromov where he proves that a certain weaker version of this question is true for the case of minimal hypersurfaces on a class of manifolds called “thick at infinity”.

Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering. The Forman-Ricci curvature (FRC) is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks is still challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation for high-order network cells and FRC, which yields an alternative and efficient formulation for computing high-order FRC in complex networks. We provide a pseudo-code, a software implementation coined FastForman, as well as a benchmark comparison with alternative implementations. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
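For orientation, the baseline edge-level Forman-Ricci curvature of an unweighted graph viewed as a 1-complex (ignoring the higher-order cells that FastForman is designed to handle) reduces to the simple formula F(u,v) = 4 - deg(u) - deg(v); a minimal sketch:

```python
def forman_curvature(adj):
    """Edge-level Forman-Ricci curvature of an unweighted graph,
    treated as a 1-dimensional complex (no triangle or higher-cell
    contributions):
        F(u, v) = 4 - deg(u) - deg(v).
    `adj` maps each vertex to the set of its neighbours."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {(u, v): 4 - deg[u] - deg[v]
            for u in adj for v in adj[u] if u < v}

# Toy graph: a path 0-1-2 attached to a triangle 2-3-4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
F = forman_curvature(adj)
```

A pendant edge such as (0, 1) is positively curved, while edges meeting high-degree vertices become negative; including 2-cells (triangles) would add correction terms, which is where the combinatorial explosion in dense networks comes from.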

The study of large complex networks often involves the partition of nodes into groups based on their connectivity. In the study of online social networks, this partition is used to identify users that belong to the same community or that play a similar role in the organization of the network. The majority of computational methods to obtain such partitions divide the nodes into assortative communities, in which nodes link preferentially to other nodes in the same community. In this talk I will argue (i) that the a priori focus on assortative structures has led to the wrong impression that this is the only relevant structure in networks; and (ii) that other, non-assortative structures are ubiquitous in complex networks and important to understand their organization. These conclusions are based on a methodology we introduce to classify the relationship between groups of nodes in networks, which is applied systematically to 52 different networks and to two social-media networks (as case studies). Reference: C. X. Liu, T. J. Alexander, and E. G. Altmann: "Non-assortative relationships between groups of nodes are typical in complex networks", PNAS Nexus, pgad364 (2023), https://academic.oup.com/pnasnexus/article/2/11/pgad364/7367864

The Gromov-Hausdorff distance is a fundamental tool in Riemannian geometry (through the topology it generates). It is also utilized in Applied Geometry and Topological Data Analysis as a metric for expressing the stability of methods which process geometric data (e.g. hierarchical clustering and persistent homology barcodes via the Vietoris-Rips filtration).
Whereas it is often easy to estimate the value of the Gromov-Hausdorff distance between two given metric spaces, its precise value is rarely easy to determine. Some of the best estimates follow from considerations related to both the stability of persistent homology features and to Gromov's filling radius. However, these turn out to be non-sharp.
In this talk, I will describe these estimates and also results which permit calculating the precise value of the Gromov-Hausdorff distance between pairs of spheres (endowed with their usual geodesic distance). These results involve lower bounds which arise from a certain version of the Borsuk-Ulam theorem that is applicable to discontinuous maps, and also matching upper bounds which are induced from specialized constructions of (a posteriori optimal) "correspondences" between spheres.

What is Information Geometry? A typical answer to this question is that Information Geometry is the application of differential geometric methods to the investigation of Information Theory. This leads, depending on the specific information-theoretical task one is considering, to the study of a seemingly quite inhomogeneous collection of Riemannian structures. Typical examples are the interior of a simplex equipped with the Fisher-Rao metric, or the Bloch ball equipped with the Bures-Helstrom metric. A question that is then quite natural to ask is whether these structures have some common features, or if they can be seen as different instances of the same general structure. As we will see in this talk, following this aim will bring us to construct a formalism for Information Geometry that has at its center the notion of $W^*$-algebras.

One major aspect in our research is to understand the role of variation in helping to modulate the information content of linguistic units, achieving optimization effects for efficient communication within the realm of scientific writing. By situating our inquiry in this controlled environment, we gain nuanced insights into how language variation and change contribute to the dynamism of information exchange in scholarly discourse.
I will present how we can detect and analyze variation and change in language use with data-driven methods that apply information-theoretic concepts without a preselection of theoretically motivated linguistic features. Traditional linguistic research often requires the preselection of specific linguistic features based on theoretical motivations. However, our method bypasses this by focusing on inherent patterns in the data itself. This enables us to uncover latent linguistic variations and changes that might not be immediately evident or might even be overlooked in traditional analyses.
While the primary focus of our study revolves around linguistic features, the methodologies we employ hold a broader applicability. The data-driven and information-theoretic methods we utilize can be extended to analyze other types of features, revealing change and variation in different contexts. This versatility underscores the potential of this approach, not just in linguistic studies but in any domain where patterns of change and evolution are of interest and where probabilities of these changes can be generated.
A notable facet of this methodology is its ability to capture the inherent asymmetry in linguistic variation and change. In many linguistic processes, the directionality matters. This becomes especially pertinent when studying knowledge evolution. By capturing this asymmetric nature, our models can offer insights into how knowledge diffuses across interdisciplinary settings. For instance, the flow of knowledge between two closely related disciplines might be smoother than between two vastly different ones. This asymmetry, which is integral in a communicative context, can shed light on the genesis and evolution of disciplines, potentially pinpointing moments when the divergence in knowledge led to the birth of a new discipline.
While our methods help us identify significant linguistic shifts, qualitative methods are crucial to understand the reasons behind these shifts, which besides communicative aims could also be related to extra-linguistic factors (e.g. change in language use during the chemical revolution). Thus, while our methods can tell us "what" has changed, they do not necessarily tell us "why". In accordance with communicative accounts, we are interested in tracing effects of variation that modulate the information content transmitted, leading to optimization effects. For this we apply additional information-theoretic notions such as entropy within paradigms and surprisal as a measure of predictability in context. In addition, traditional historical research would provide the contextual depth to understand the reasons behind the linguistic changes we trace.

The three questions originally proposed by Immanuel Kant are used to reflect on the aims and scope of sociophysics. We start from the historical perspective to highlight recent contributions from physics to a better understanding of social systems. This is followed by a critical assessment of what is missing in these contributions. The main part of the talk illustrates current developments in sociophysics, with an emphasis on our own work, to analyze, simulate, and influence social systems.
Literature: Frank Schweitzer, Sociophysics, Physics Today vol. 71, issue 2, pp. 40-46 (2018)
Free download: https://doi.org/10.1063/PT.3.3845

We provide the notions of connection $1$-forms and curvature $2$-forms on graphs. We prove a Weitzenböck formula for connection Laplacians in this setting. We also define a discrete Yang-Mills functional and study its Euler-Lagrange equations.

Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is of central importance to the key theme of how dynamics on networks relate to their structure more generally. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time auto-regressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, enabling it to be more general than previous work in not requiring either symmetric (undirected) or diagonalizable connectivity matrices, and allowing arbitrary self-link weights. More importantly, our approach quantifies the relative synchronizability specifically in terms of the contribution of process motif (or directed walk) structures. We demonstrate that in general the relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved for discrete or continuous-time dynamics. Our insights analytically explain several known general results regarding synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
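A minimal sketch of the covariance-based measure in the Ornstein-Uhlenbeck case: for dx = -Lx dt + dW, the stationary covariance projected orthogonal to the uniform (synchronized) state solves a Lyapunov equation, and its trace quantifies deviation from synchrony. The identity noise covariance and the ring-vs-complete comparison below are illustrative assumptions, not the talk's process-motif decomposition:

```python
import numpy as np
from scipy.linalg import null_space, solve_continuous_lyapunov

def sync_variance(L):
    """Trace of the stationary covariance of dx = -L x dt + dW,
    projected onto the space orthogonal to the uniform (synchronized)
    state. Smaller trace = closer to synchrony. Requires only that the
    projected drift be stable, not that L be symmetric."""
    n = L.shape[0]
    Q = null_space(np.ones((1, n)))   # orthonormal basis of 1^perp
    Lr = Q.T @ L @ Q                  # exact reduction for row-sum-zero L
    # Stationary covariance S solves Lr S + S Lr^T = I (projected noise).
    return np.trace(solve_continuous_lyapunov(-Lr, -np.eye(n - 1)))

def ring_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 8
v_ring = sync_variance(ring_laplacian(n))
v_complete = sync_variance(n * np.eye(n) - np.ones((n, n)))
```

The sparsely connected ring carries several times more variance orthogonal to the synchronized state than the complete graph on the same nodes, matching the intuition that dense, well-mixed networks synchronize more readily.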

This talk focuses on the effects of noise and heterogeneity (diversity) on neuronal dynamics, with a particular emphasis on self-induced stochastic resonance (SISR). The previous literature has analyzed the combined effects of noise and diversity in neural dynamics and mainly concluded that adding optimal diversity on top of noise will further enhance resonance phenomena (including stochastic resonance, coherence resonance, and even synchronization) caused by noise alone, i.e., the role of optimal diversity is always constructive. The first part of the talk challenges this conclusion by demonstrating that the effect of diversity on self-induced stochastic resonance can only be antagonistic. The second part of the talk discusses the mathematical difficulties of analyzing SISR in n-dimensional (n>2) dynamical systems without a slow-fast structure; in particular, we shall discuss this using the Hodgkin-Huxley neuron model. Finally, if time allows, we shall briefly discuss potential applications of noise-induced resonance phenomena in enhancing bio-inspired machine learning based on reservoir computing.

It is well-known that shadowing holds in a neighborhood of a hyperbolic set. Shadowing can also hold for non-hyperbolic systems, but due to results of Sakai, Abdenur, Diaz, Pilyugin, and Tikhomirov, shadowing is "almost" equivalent to structural stability. At the same time, numerical experiments by Hammel, Grebogi, and Yorke for the logistic and Henon maps show that shadowing holds for relatively long pseudotrajectories. This raises the question of which type of shadowing holds for systems which are not necessarily hyperbolic.
I consider a probabilistic approach to this topic. I show that for infinite pseudotrajectories it does not change the notion. At the same time, it shows that relatively long pseudotrajectories can be shadowed by exact trajectories with high probability. The main technique is a reduction to a special form of the gambler's ruin problem and a mild form of the large deviation principle for random walks. We show that our approach works for several examples, e.g. skew product maps.
The talk is based on joint work with G. Monakov

Whether oscillations in biological neuronal networks are merely a byproduct of neuronal interactions or serve computational purposes continues to be a topic of active discussion. Here, we report on how the inclusion of hallmark features of the cerebral cortex such as the presence of oscillatory units, heterogeneity, synaptic delays, and modularity into recurrent neural networks (RNNs) simulated in silico influences their performance on common pattern recognition tasks when trained with a gradient-based learning rule. We find that our RNNs composed of damped harmonic oscillators (DHOs) learn to desynchronize their activity to produce high-dimensional representations of stimuli, and by leveraging non-linear dynamical effects such as frequency-dependent gain modulation form a computational substrate that vastly outperforms state-of-the-art gated RNN architectures (GRU, LSTM) in learning speed, task performance, and noise resiliency. Analysis of the structure and dynamics of our networks provides a posteriori explanations for a number of physiological phenomena whose function so far has been elusive or has given rise to controversial discussions.

Theories of urban scaling have demonstrated remarkable predictive accuracy at aggregate levels. However, they have overlooked the stark inequalities that exist within cities. Human networking and productivity exhibit heavy-tailed distributions, with some individuals contributing disproportionately to city totals. Here we use micro-level data from Europe and the United States on interconnectivity, productivity and innovation in cities. We find that the tails of within-city distributions and their growth by city size account for 36–80% of previously reported scaling effects, and 56–87% of the variance in scaling between indicators of varying economic complexity. Providing explanatory depth to these findings, we identify a mechanism—city size-dependent cumulative advantage—that constitutes an important channel through which differences in the size of tails emerge.
Our findings demonstrate that urban scaling is in large part a story about inequality in cities, implying that the causal processes underlying the heavier tails in larger cities must be considered in explanations of urban scaling. This result also shows that agglomeration effects benefit urban elites the most, with the majority of city dwellers partially excluded from the socio-economic benefits of growing cities.

With more than 10 million monthly contributing users, Reddit is one of the largest and most influential social media platforms in the world. Understanding its dynamics at different scales is an important research challenge, not only in its own right but also in relation e.g. to the study of political polarization. Reddit has been steadily growing over the last few years, both in terms of content and userbase. For example: half of all threads and comments ever created date back to the last 2 years, despite Reddit being over 15 years old. I will therefore start by presenting a general overview of Reddit statistics, updating some figures from an earlier review work [1]. Next, I will present some results on the modelling of discussions across various subreddits. It is known that the number of comments per thread Nc across the whole of Reddit follows approximately a power-law distribution [1]. We analysed the largest 500 subreddits in the years 2019-2022 using data from the Pushshift dataset [2] and we found that the distribution of Nc varies considerably across individual communities. In most subreddits Nc follows approximately a power-law distribution with an upper cut-off. Both the width and the exponent of the power-law window depend on the particular community. In other subreddits, however, a power-law fit appears inappropriate. In order to explain this variability we developed a preferential attachment model where the ability of a thread to attract comments is affected by its age and also by an intrinsic fitness. I will then conclude by discussing the phenomenology of the model, its limitations, and some possible extensions.
[1] Medvedev, A.N. et al. (2019) doi.org/10.1007/978-3-030-14683-2_9
[2] Baumgartner, J. et al. (2020) doi.org/10.1609/icwsm.v14i1.7347
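A hypothetical minimal version of such a fitness-plus-aging preferential attachment model can be simulated in a few lines; the functional form, the exponential aging kernel, and all parameter values below are illustrative guesses, not the model of the talk:

```python
import numpy as np

def simulate_threads(n_steps, arrival_rate=0.02, tau=500.0, seed=0):
    """Toy thread-growth model: at each step a new thread appears with
    probability `arrival_rate`; otherwise one comment lands on thread i
    with probability proportional to
        (k_i + 1) * f_i * exp(-age_i / tau),
    where k_i is thread i's comment count, f_i a random (log-normal)
    intrinsic fitness, and age_i the time since the thread was created."""
    rng = np.random.default_rng(seed)
    counts, fitness, birth = [0], [rng.lognormal()], [0]
    for t in range(1, n_steps + 1):
        if rng.random() < arrival_rate:          # a new thread is born
            counts.append(0)
            fitness.append(rng.lognormal())
            birth.append(t)
            continue
        w = (np.array(counts) + 1.0) * np.array(fitness) \
            * np.exp(-(t - np.array(birth)) / tau)
        i = rng.choice(len(counts), p=w / w.sum())
        counts[i] += 1                           # comment on thread i
    return np.array(counts)

counts = simulate_threads(2000)
```

Rich-get-richer attachment plus heterogeneous fitness concentrates comments in a few threads, producing the heavy-tailed thread-size distributions described above, while the aging kernel supplies the upper cut-off.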

This is a survey talk discussing the following objects. A point configuration $P$ in Euclidean space and a height function give rise to a polyhedral subdivision of the convex hull of $P$. Subdivisions which arise in this way are called regular (or coherent). These polyhedral complexes admit a dual, which is again a polyhedral complex. Special cases include the tight spans of finite metric spaces (studied by Bandelt and Dress) as well as all tropical linear spaces. In this way the talk also serves as a gentle introduction to tropical geometry through polyhedral geometry.

The notion of Ricci flat graphs was introduced in 1996 by Chung and Yau in connection to a logarithmic Harnack inequality. I will show that self-centred Ollivier-Ricci Bonnet-Myers graphs are Ricci flat and discuss the use of Prolog to obtain these results. I will then show the two notions of curvature added to the Graphing Calculator and prove a result for Steinerberger curvature on trees.

The effective resistance is a concept that originated in the analysis of electrical circuits, where it characterizes how easily an electrical current can flow between two points in the circuit. More abstractly, the effective resistance can be defined between pairs of vertices in a weighted graph. In the graph-theoretic context, the effective resistance has many interpretations: it is a metric between the vertices of a graph and it is related to random spanning trees and random walks. In this talk, I want to highlight two geometric aspects of effective resistances. First, I will discuss a result by Fiedler that describes a bijection between weighted graphs and hyperacute simplices (simplices with certain angular constraints). Second, I will describe some recent work together with Renaud Lambiotte on discrete curvature based on the effective resistance.
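The Laplacian-pseudoinverse formula R_uv = L+_uu + L+_vv - 2 L+_uv gives a compact way to compute all-pairs effective resistances; a quick sketch:

```python
import numpy as np

def effective_resistance(L):
    """All-pairs effective resistance matrix from a graph Laplacian L,
    via its Moore-Penrose pseudoinverse:
        R_uv = L+_uu + L+_vv - 2 L+_uv."""
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Path graph 0-1-2 with unit conductances: resistances add in series.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
R = effective_resistance(L)
```

On the path, R(0,1) = 1 and R(0,2) = 2, exactly the series-circuit rule; on graphs with cycles the values drop below shortest-path distance, which is what makes the effective resistance a genuinely different metric.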

In this talk, I am going to introduce main tools of Geometric and Topological Data Analysis including persistence, merge trees and Reeb graphs. I will also present some of my work relating to stability, optimality and characterizations of these tools.

One challenge in the empirical validation of agent-based models (ABMs) is how to infer reliable insights from numerical simulations. Ergodicity (besides stationarity) is a precondition in any estimation-related task; however, it has not been systematically explored and is often simply presumed. For non-ergodic observables it remains largely unclear how to deal with the associated uncertainty. Here we show how an understanding of (broken) ergodicity in the convergence of summary statistics (so-called moments) improves the validation and calibration of 15 ABMs. We take two prototype agent-based financial market models and run Monte Carlo experiments to study convergence behaviour of selected moments. We find that for most moments the convergence time it takes to reach asymptopia is infeasibly long, thus leaving us in a pre-asymptopic regime. Choosing an efficient mix of ensemble size and simulated time length can help guiding validation efforts through this jungle of uncertainty.
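The slow convergence of higher moments can be illustrated outside the ABM setting with i.i.d. heavy-tailed "returns"; this stand-in (Student-t with three degrees of freedom, whose fourth moment is infinite) is purely illustrative and not one of the talk's models:

```python
import numpy as np

def moment_paths(n_runs, T, df=3.0, seed=0):
    """Running time-average estimates of the 4th moment of i.i.d.
    Student-t(df) 'returns' for an ensemble of independent runs.
    For df = 3 the 4th moment is infinite, so the time averages never
    settle: a stand-in for the pre-asymptopic regime."""
    rng = np.random.default_rng(seed)
    x = rng.standard_t(df, size=(n_runs, T))
    csum = np.cumsum(x ** 4, axis=1)
    t = np.arange(1, T + 1)
    return csum / t                       # shape (n_runs, T)

paths = moment_paths(n_runs=20, T=5000)
# Dispersion across the ensemble after T steps: far from 1 when
# the moment does not converge.
spread = paths[:, -1].max() / paths[:, -1].min()
```

Comparing `spread` across ensemble sizes and horizons T mimics the ensemble-versus-time trade-off discussed above: for a convergent moment the ratio shrinks toward 1, whereas here it stays large no matter how long one simulates.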

We study a generalization of the classical multidimensional scaling (cMDS) procedure to the setting of general metric measure spaces. We identify spectral properties of the generalized cMDS operator, thus providing a natural and rigorous mathematical formulation of cMDS. Furthermore, we characterize the cMDS output of several continuous exemplar metric measure spaces. In particular, we characterize the cMDS output for spheres $\mathbb{S}^{d-1}$ (with geodesic distance) and subsets of Euclidean space. The case of spheres requires that we establish that their cMDS operator is trace class, a condition which is natural in settings where the cMDS operator has infinite rank (such as the case of spheres with geodesic distance). Finally, we establish the stability of the generalized cMDS process with respect to the Gromov-Wasserstein distance.
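For reference, the finite-sample classical MDS procedure that the talk generalizes can be sketched as follows (double-centering of the squared distance matrix followed by a spectral embedding):

```python
import numpy as np

def cmds(D, k):
    """Classical MDS: double-center the squared distance matrix,
        B = -1/2 * J D^2 J,   J = I - (1/n) 11^T,
    then embed with the top-k eigenpairs of B."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]          # pick the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three collinear points at 0, 1, 3: cMDS recovers them up to isometry.
D = np.array([[0., 1., 3.],
              [1., 0., 2.],
              [3., 2., 0.]])
Y = cmds(D, 1)
Drec = np.abs(Y - Y.T)   # pairwise distances of the 1-D embedding
```

For Euclidean input the embedding reproduces the distances exactly; the spectral properties of the operator B are precisely what the measure-theoretic generalization above extends (with trace-class issues appearing when the rank becomes infinite).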

A fundamental question in neuroscience is how brain organisation gives rise to humans’ unique cognitive abilities. Although complex cognition is widely assumed to rely on frontal and parietal brain regions, the underlying mechanisms remain elusive: current approaches are unable to disentangle different forms of information processing in the brain. Here, we introduce a framework to identify synergistic and redundant contributions to neural information processing and cognition. Leveraging multimodal data including functional and diffusion MRI, PET, cytoarchitectonics and genetics, we reveal that synergistic interactions are the fundamental drivers of complex human cognition. Whereas redundant information dominates sensorimotor areas, synergistic activity is closely associated with the brain’s prefrontal-parietal and default networks; furthermore, meta-analytic results demonstrate a close relationship between high-level cognitive tasks and synergistic information. From an evolutionary perspective, the human brain exhibits higher prevalence of synergistic information than non-human primates. At the macroscale, we demonstrate that high-synergy regions underwent the highest degree of evolutionary cortical expansion. At the microscale, human-accelerated genes promote synergistic interactions by enhancing synaptic transmission. These convergent results provide critical insights that synergistic neural interactions underlie the evolution and functioning of humans’ sophisticated cognitive abilities, and demonstrate the power of our widely applicable information decomposition framework.

The Gromov-Wasserstein (GW) distance is a generalization of the standard Wasserstein distance between two probability measures on a given ambient metric space. The GW distance assumes that these two probability measures might live on *different* ambient metric spaces and therefore implements an actual comparison of pairs of metric measure spaces. A metric-measure space is a triple (X,dX,muX) where (X,dX) is a metric space and muX is a Borel probability measure over X.
In practical applications, this distance is estimated either directly via gradient-based optimization approaches, or through the computation of lower bounds which arise from distributional invariants of metric-measure spaces.
One particular such invariant is the so-called ‘global distance distribution’ which precisely encodes the distribution of pairwise distances between points in a given metric measure space. This invariant has been used in many applications yet its classificatory power is not yet well understood.
This talk will overview the construction of the GW distance, the stability of distributional invariants, and will also discuss some results regarding the injectivity of the global distribution of distances for smooth planar curves, hypersurfaces, and metric trees.
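A quick sketch of the global distance distribution invariant for finite samples, compared here via the 1-D Wasserstein distance of sorted distance lists; the comparison method and the example spaces are illustrative choices:

```python
import numpy as np

def pairwise_distances(X):
    """Euclidean distance matrix of a point cloud X (n x d)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def ddist(X, Y):
    """Compare the (uniform-measure) global distance distributions of
    two equal-size point clouds via the 1-D Wasserstein distance of
    their sorted upper-triangular distance lists."""
    dX = np.sort(pairwise_distances(X)[np.triu_indices(len(X), 1)])
    dY = np.sort(pairwise_distances(Y)[np.triu_indices(len(Y), 1)])
    return np.abs(dX - dY).mean()

# Same number of points on a circle vs. on a segment.
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
s = np.linspace(0, 1, 20)
segment = np.column_stack([s, np.zeros(20)])
```

The invariant vanishes on identical spaces and separates the circle from the segment, but, as the injectivity results in the talk address, it is not obvious a priori which spaces it can distinguish in general.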

Shifted symplectic structures are arguably the correct concept when one tries to put symplectic structures on stacks. They arise as reduced phase spaces of (AKSZ, or BV) sigma models as the dimension goes higher. When we do this for the Chern-Simons sigma model, we end up with $BG$. In this talk, we explore various differential-geometric models (1-group, 2-group, and, if time allows, double-group) realising this (2-shifted) symplectic structure in concrete formulas, and we show the equivalences between them.
In the infinite dimensional models (2-group, double-group), Segal's symplectic form on based loop groups turns out to be additionally multiplicative or almost so. These models are equivalent to a finite dimensional model with Cartan 3-form and Karshon-Weinstein 2-form via Morita Equivalence. All these forms give rise to the first Pontryagin class on $BG$. Moreover, they are related to the original invariant pairing on the Lie algebra through an explicit integration and Van Est procedure. It's a joint work with Miquel Cueca Ten.

In this paper we present a general framework in which one can rigorously study the effect of spatio-temporal noise on traveling waves, stationary patterns and oscillations that are invariant under the action of a finite-dimensional set of continuous isometries (such as translation or rotation). This formalism can accommodate patterns, waves and oscillations in reaction-diffusion systems and neural field equations. To do this, we define the phase by precisely projecting the infinite-dimensional system onto the manifold of isometries. Two differing types of stochastic phase dynamics are defined: (i) a variational phase, obtained by insisting that the difference between the projection and the original solution is orthogonal to the non-decaying eigenmodes, and (ii) an isochronal phase, defined as the limiting point on the manifold obtained by taking t → ∞ in the absence of noise. We outline precise stochastic differential equations for both types of phase. The variational phase SDE is then used to show that it is exponentially unlikely that the system leaves the attracting basin of the manifold before an exponentially long period of time (exponential in ε^{-2}, where ε is the magnitude of the noise). In the case that the manifold is periodic (such as for spiral waves, spatially-distributed oscillations, or neural-field phenomena on a compact domain), the isochronal phase SDE is used to determine asymptotic limits for the average occupation times of the phase as it wanders in the basin of attraction of the manifold over very long times. In particular, we find that frequently the correlation structure of the noise biases the wandering in a particular direction, such that the noise induces a slow oscillation that would not be present in the absence of noise.

The cortical networks that underlie behavior exhibit an orderly functional organization at local and global scales, which is readily evident in the visual cortex of carnivores and primates. Here, neighboring columns of neurons represent the full range of stimulus orientations and contribute to distributed networks spanning several millimeters. However, the principles governing functional interactions that bridge this fine-scale functional architecture and distant network elements are unclear, and the emergence of these network interactions during development remains unexplored.
Here, by imaging spontaneous activity patterns in mature ferret visual cortex, we find widespread and specific modular correlation patterns that accurately predict the local structure of visually-evoked orientation columns from the spontaneous activity of neurons that lie several millimeters away. The large-scale networks revealed by correlated spontaneous activity show abrupt ‘fractures’ in continuity that are in tight register with evoked orientation pinwheels. Chronic in vivo imaging demonstrates that these large-scale modular correlation patterns and fractures are already present at early stages of cortical development and predictive of the mature network structure. Silencing feed-forward drive through either retinal or thalamic blockade does not affect network structure suggesting a cortical origin for this large-scale correlated activity, despite the immaturity of long-range horizontal network connections in the early cortex. Using a circuit model containing only local connections, we demonstrate that such a circuit is sufficient to generate large-scale correlated activity, while also producing other key features of spontaneous activity, all in close agreement with our empirical data. These results demonstrate the precise local and global organization of cortical networks revealed through correlated spontaneous activity and suggest that local connections in early cortical circuits generate structured long-range network correlations that underlie the subsequent formation of visually-evoked distributed functional networks.

Brains process information through the collective dynamics of large neural networks. Collective chaos was suggested to underlie the complex ongoing dynamics observed in cerebral cortical circuits and to determine the impact and processing of incoming information streams. In dissipative systems, chaotic dynamics takes place on a subset of phase space of reduced dimensionality and is organized by a complex tangle of stable, neutral and unstable manifolds. Key topological invariants of this phase space structure, such as the attractor dimension and the Kolmogorov-Sinai entropy, have so far remained elusive.
Here we calculate the complete Lyapunov spectrum of recurrent neural networks. We show that chaos in these networks is extensive with a size-invariant Lyapunov spectrum and characterized by attractor dimensions much smaller than the number of phase space dimensions. We find that near the onset of chaos, for very intense chaos, and discrete-time dynamics, random matrix theory provides analytical approximations to the full Lyapunov spectrum. We show that a generalized time-reversal symmetry of the network dynamics induces a point-symmetry of the Lyapunov spectrum reminiscent of the symplectic structure of chaotic Hamiltonian systems. Fluctuating input reduces both the entropy rate and the attractor dimension. For trained recurrent networks, we find that Lyapunov spectrum analysis provides a quantification of error propagation and stability achieved. Our methods apply to systems of arbitrary connectivity, and we describe a comprehensive set of controls for the accuracy and convergence of Lyapunov exponents.
Our results open a novel avenue for characterizing the complex dynamics of recurrent neural networks and the geometry of the corresponding chaotic attractors. They also highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks.

We investigate the stability of traveling-pulse solutions to the stochastic FitzHugh-Nagumo equations with additive noise. Special attention is given to the effect of small noise on the classical deterministically stable traveling pulse. Our method is based on adapting the velocity of the traveling wave by solving a stochastic ordinary differential equation (SODE) and tracking perturbations of the wave, which satisfy a stochastic partial differential equation (SPDE) coupled to an ordinary differential equation (ODE). This approach has been employed by Krüger and Stannat for scalar stochastic bistable reaction-diffusion equations such as the Nagumo equation.
A main difference in our situation of an SPDE coupled to an ODE is that the linearization around the traveling wave is not self-adjoint anymore, so that fluctuations around the wave cannot be expected to be orthogonal in a corresponding inner product. We demonstrate that this problem can be overcome by making use of Riesz instead of orthogonal spectral projections. We expect that our approach can also be applied to traveling waves and other patterns in more general situations such as systems of SPDEs that are not self-adjoint. This provides a major generalization as these systems are prevalent in many applications.
This is joint work with Manuel Gnann and Christian Kuehn.

Given some collection C of n>>0 data points, one is often interested not in the particular representation of C, but rather in some intrinsic relations between points in C. An example of such relations could be a distance function on C. Recording such data would require O(n^2) numeric entries. For, let's say, n~10^6 this is unrealistic. If the distance function on C comes from the restriction of the Euclidean distance under some embedding of C in R^N, and one allows some multiplicative error in the distances, the Johnson-Lindenstrauß Lemma allows one to embed each point in O(ln n) dimensions, reducing the representation to O(n · ln n) numeric entries (which is manageable for n~10^6). I would like to investigate whether there are other classes of finite metric spaces, besides Euclidean-embeddable ones, where a similar reduction is possible. One candidate would be delta-hyperbolic spaces. I will use Gromov's notion of Macroscopic Dimension to formulate the problem and to reduce it to the study of the injective hulls of metric spaces.
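A minimal sketch of the Johnson-Lindenstrauß reduction via a random Gaussian projection (the constant 8 in the target dimension and the tolerance ε = 0.25 are illustrative choices, not sharp constants):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 500, 10_000                       # many points in a high-dimensional space
X = rng.normal(size=(n, N))

eps = 0.25
k = int(8 * np.log(n) / eps ** 2)        # target dimension O(eps^-2 ln n)
P = rng.normal(size=(N, k)) / np.sqrt(k) # random Gaussian projection
Y = X @ P                                # n points now live in k dimensions

# check the distortion on a random sample of pairs
i, j = rng.integers(0, n, 200), rng.integers(0, n, 200)
mask = i != j
d_old = np.linalg.norm(X[i[mask]] - X[j[mask]], axis=1)
d_new = np.linalg.norm(Y[i[mask]] - Y[j[mask]], axis=1)
ratio = d_new / d_old
print(ratio.min(), ratio.max())          # concentrated around 1
```

Storing Y requires only n·k = O(n ln n) entries instead of the O(n^2) entries of the full distance matrix.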

I will discuss some aspects of the geometry of the space of quantum states in finite dimensions (which means I will mainly focus on good ol' matrices).
After a brief introduction, I will try to exhibit (what I find to be) a curious link that exists between some of the most used Riemannian metric tensors in quantum information geometry (e.g., the Bures metric tensor, the Wigner-Yanase metric tensor, and the Bogoliubov-Kubo-Mori metric tensor) and some group actions on the space of quantum states which are associated with suitable extensions of the unitary group.

The description complexity (a.k.a. Kolmogorov complexity) of a string, C(x), is defined as the length of the shortest program that prints this string x. Given a pair of strings x and y, we define the mutual information between them I(x:y) as C(x) + C(y) - C(x,y). Intuitively, the mutual information is a measure of correlation between two strings: the closer the correlation is, the bigger the mutual information. This notion has a clean formal definition, but it lacks a more explicit interpretation or a ``physical'' meaning: in the general case, we cannot find a string z of complexity I(x:y) that could be interpreted as a common part of x and y.
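Since Kolmogorov complexity is uncomputable, it can only be illustrated by proxies; the sketch below replaces C by the length of a zlib-compressed string, a crude stand-in with no formal guarantees, purely to convey the shape of the definition:

```python
import zlib

def C(s: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: compressed length."""
    return len(zlib.compress(s, 9))

def I(x: bytes, y: bytes) -> int:
    """Compression-based proxy for the mutual information
    I(x:y) = C(x) + C(y) - C(x,y)."""
    return C(x) + C(y) - C(x + y)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 10 + b"pack my box"
c = bytes(range(256)) * 4   # content unrelated to a

print(I(a, b), I(a, c))     # shared structure => larger proxy value
```

Strings with a large common part compress much better jointly than separately, so the proxy is large; for unrelated strings it is close to zero.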
However, it turns out that the mutual information can be interpreted in terms of a communication protocol. It is a communication protocol with two parties, one having x and the other one having y, with interaction on a public channel. The aim of the protocol is to establish the longest shared secret key without revealing any information on this key to the eavesdropper. It turns out that for every pair of inputs x,y the optimal size of the key is equal to the mutual information between x and y.
In the talk we will discuss the context around this question (the invariance theorem for Kolmogorov complexity, the chain rule) and explain how the question about the mutual information can be translated in the language of combinatorial properties of graphs. Then, as long as time allows, we sketch the technical ideas behind the proof (randomness extractors, information inequalities, spectral bounds for graphs with good mixing properties).
The talk is based on joint works with Marius Zimand and Emirhan Gürpinar.

Most models of opinion dynamics assume that individuals constantly exchange opinions and adjust their opinion along the way according to some rule or mechanism. Such approaches neglect that people can also choose to not express their opinion. We introduce a model of public opinion expression in which the feedback agents receive influences whether they voice their opinion publicly or not. We carry out an analysis both from a game-theoretic and a dynamical systems perspective.

In linear dynamical systems, one has an elegant way to study dynamics and control using the state space formulation. However, in non-linear systems, there is no dynamics matrix to begin with. In this talk, we discuss how a system of non-linear ODEs can be dimensionally unfolded such that non-linear terms in the vector field can be re-expressed using auxiliary dynamical variables. It turns out that for a large class of non-linear ODEs there exists an equivalent unfolded dynamical system with only polynomial non-linearities. Expressing generic non-linear vector fields using polynomials has some advantages. Methods of algebraic geometry can now be used to find attractors by organizing the system in a Gröbner basis. More importantly, once we have a polynomial vector field, the system can straightforwardly be expressed in state space form, yielding a state-dependent dynamics matrix. Additionally, we will discuss a graphical representation of this matrix as a network with non-linear edge weights, which generalizes the usual notion of networks with dyadic edges to networks with compounded edges. A non-linear state space formulation of dynamical systems suggests a way to formalize stability and controllability criteria of non-linear dynamical systems. More specifically, with this we derive a non-linear generalization of the Kalman controllability matrix. Another consequence of expressing systems in generalized state space form is that it admits an eigenvalue flow in the spectrum of the system. This suggests a way to perform eigenvalue assignments for non-linear control. We demonstrate these applications for simple systems.
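A minimal example of such an unfolding: the non-polynomial system x' = sin(x) becomes polynomial after introducing the auxiliary variables y = sin(x) and z = cos(x), since then x' = y, y' = z·y, z' = -y². The toy Euler integration below (our own check, not the talk's construction) merely confirms that the two formulations agree:

```python
import numpy as np

# Original non-polynomial system: x' = sin(x).
# Unfolded with y = sin(x), z = cos(x): purely polynomial vector field.
def f_orig(x):
    return np.sin(x)

def f_unfolded(state):
    x, y, z = state
    return np.array([y, z * y, -y * y])

def euler(f, s0, dt=1e-4, T=2.0):
    s = np.array(s0, dtype=float)
    for _ in range(int(T / dt)):
        s = s + dt * f(s)
    return s

x0 = 0.5
x_direct = euler(f_orig, x0)
x_unf, y_unf, z_unf = euler(f_unfolded, [x0, np.sin(x0), np.cos(x0)])
print(float(x_direct), x_unf)   # the two trajectories agree
```

Note that the unfolded flow preserves the algebraic constraint y² + z² = 1, which is the invariant variety on which the original dynamics lives.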

In my talk, I will give an overview of recent results aimed at providing more elements of dynamical systems theory for SPDEs. This includes results on existence, regularity, attractors, sample path estimates, center manifolds, bifurcations, and travelling waves. Then I will go through one proof in more detail, showing how to construct center manifolds for rough differential equations.

We define a new Cheeger-like constant for graphs that bounds the largest eigenvalue of the normalized Laplace operator. This is a joint work with Jürgen Jost.
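The new constant itself is not reproduced here, but the object it bounds is easy to compute; a small sketch of the normalized Laplace operator and its spectrum, which always lies in [0, 2], with the largest eigenvalue equal to 2 exactly when a connected component is bipartite:

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A
    (assumes no isolated vertices)."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - Dinv @ A @ Dinv

# path graph on 5 vertices (bipartite)
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1

eigs = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))
print(eigs)  # all in [0, 2]; the top eigenvalue is 2 since the path is bipartite
```

A Cheeger-like constant bounding the top of this spectrum thus controls, roughly, how far the graph is from being bipartite.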

We consider the graphs over a bounded strictly convex domain $\Omega$ in $\mathbb{R}^n$ with prescribed variable contact angle with the boundary cylinder $\partial\Omega\times \mathbb{R}$, which move by nonparametric mean curvature flow. When the contact angle is nearly perpendicular, we show that the solutions converge to ones which move by translation. Subsequently, the existence and uniqueness of smooth solutions to the capillary problem on the strictly convex domain are also discussed. As for the hypersurface evolving along with the mean curvature flow in Riemannian manifold endowed with a Killing vector field, similar results are also obtained. Lastly, if time permits, we introduce a new mean curvature type flow with capillary boundary in the unit ball, which preserves the volume of the bounded domain enclosed by the hypersurface, and monotonically decreases the energy functional. We show that it has longtime existence and converges to the spherical cap.

In an attempt to capture both the genealogical and ecological factors in one crude approximation, one can consider a class of population models described in terms of countable particle systems, in which every particle is equipped with a type and a level. Typically the type encodes the spatial position and genetic type of the particle. The evolution of levels encodes the genealogy. We shall discuss the construction of such population models, building on an example of the Feller branching process.
As an application, we shall discuss the following. It is well known that the dynamics of a subpopulation of a rare type in the Wright-Fisher model is governed by a Feller branching process. We will sketch the proof of an analogous result for a spatially distributed population evolving according to the spatial Lambda-Fleming-Viot model in a random environment. The limiting process is the super-Brownian motion in random environment. This is joint work with Jonathan Chetwynd-Diggle.

In this seminar, I will discuss the general idea of the recent research programme coined Ergodicity Economics (EE). The seminar is shaped to provide the basis for future collaborations, thus I will briefly highlight some theoretical and experimental results and conclude with current research questions.
Ergodic theory studies the behaviour of averages and arose from the ergodicity problem in the foundations of statistical mechanics. Ergodicity Economics studies the ergodicity problem in the context of economics, more specifically the conditions under which ensemble averages coincide with time averages. As it turns out, ergodicity is a foundational issue especially in the context of decision making under uncertainty. Although the evaluation of gambles is at the basis of formal economics, the community is mostly unfamiliar with the concept of ergodicity. This leaves ample opportunities for joint future research projects.
Embedding economics within historical time is a long-standing desideratum. Growth rate maximisation -- as the main novelty associated with EE -- is not merely a different rationality criterion; rather, it uses physically meaningful observables (time-averaged quantities) to coherently describe economic processes.
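The ergodicity problem can be conveyed by the standard multiplicative coin-toss gamble (the factors 1.5 and 0.6 are the usual textbook choice, not taken from this seminar):

```python
import numpy as np

# Multiplicative gamble: each round, wealth is multiplied by 1.5 (heads)
# or 0.6 (tails) with equal probability.
up, down = 1.5, 0.6

ensemble_growth = 0.5 * (up + down)   # expected wealth factor per round
time_growth = np.sqrt(up * down)      # typical long-run factor per round
                                      # (geometric mean = time average)

print(ensemble_growth)  # 1.05: the ensemble average grows 5% per round
print(time_growth)      # ~0.949: almost every individual trajectory decays
```

The ensemble average grows while the time-average growth rate is negative: the process is non-ergodic, and expectation-value reasoning misleads an individual decision-maker.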

We develop a general mathematical framework for stock prices based on rough path theory, a recent important extension of the classical Ito calculus. Specifically, we propose a stock price model driven by a Hölder continuous noise, understood in the sense of a rough differential equation. The no-arbitrage principle is then satisfied under the assumption of transaction costs as long as the driving noise is a sticky process.
This model offers the possibility of additional noises hidden in the signatures of rough paths, hence supporting the idea of a mixture of a standard Brownian noise and another source of long-memory noise, for instance a fractional Brownian motion. It also implies a nonlinear relation between the expectation and the variance of the logarithmic return. We present numerical evidence from stock indices and discuss the potential risk of model uncertainty, where the ambiguity comes from the signatures of rough paths.
This is joint work with Jürgen Jost.

Unlike the Hodgkin-Huxley picture in which the nerve impulse results from ion exchanges across the cell membrane through ion-gate channels, in the so-called soliton model proposed by Heimburg and Jackson, the impulse is seen as an electromechanical process related to thermodynamical phenomena accompanying the generation of the action potential. In the present work, an improved soliton model for biomembranes and nerves is used to establish that in a low-amplitude approximation, the dynamics of nerve impulses can be described by the damped nonlinear Schrödinger equation (DNLSE), which is shown to admit soliton trains. This solution contains an undershoot beneath the baseline (“hyperpolarization”) and a “refractory period,” i.e., a minimum distance between pulses, and therefore it represents typical nerve profiles. Likewise, the linear stability of wave trains is analyzed. The results from the linear stability analysis show that, in addition to the main periodic wave trains observed in most nerve experiments, five other localized background modes can copropagate along the nerve. These modes could eventually be responsible for various fundamental processes in the nerve such as phase transitions, and electrical and mechanical changes.
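For orientation, a damped nonlinear Schrödinger equation of the generic form invoked here can be written as follows (in dimensionless variables; the coefficients $P$, $Q$ and the damping rate $\gamma$ are placeholders rather than the specific ones derived in this work):

```latex
i\,\frac{\partial \psi}{\partial t} + P\,\frac{\partial^2 \psi}{\partial x^2} + Q\,|\psi|^2\psi = -\,i\,\gamma\,\psi, \qquad \gamma > 0.
```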

The Bayesian approach to inverse problems is often valued for its regularizing properties and for the way uncertainty quantification is built directly into the paradigm. In recent decades, fueled by the increasing availability of computing power, it has seen a rapid increase in popularity and an explosion of new applications.
The main point of this talk is to show how this framework can be used to estimate unknown parameters of a stochastic differential equation given discrete-time observations of the underlying process---a setup often encountered in neuroscience, physics, population dynamics and many other sciences. One of the main difficulties afflicting successful employment of the Bayesian approach to solving such problems is the necessity to simulate diffusion processes conditioned on their end-points---a well-studied problem that nonetheless does not have a universally robust and efficient solution. In this talk I will present some of the recent advances in simulation of conditioned diffusions and discuss some of the algorithmic difficulties in trying to extend those methodologies to more elaborate processes.
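The simplest instance of a conditioned diffusion is the Brownian bridge, whose guiding drift is known in closed form; the Euler-Maruyama sketch below conveys the idea, while the nonlinear diffusions considered in the talk require the more elaborate proposal schemes discussed there:

```python
import numpy as np

def brownian_bridge(x0, xT, T=1.0, n=1000, rng=None):
    """Sample a Brownian path conditioned on its end-point, using the
    bridge SDE  dX_t = (xT - X_t)/(T - t) dt + dW_t  (Euler-Maruyama)."""
    rng = rng or np.random.default_rng()
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * dt
        drift = (xT - x[i]) / (T - t)   # pulls the path towards xT
        x[i + 1] = x[i] + drift * dt + np.sqrt(dt) * rng.normal()
    x[-1] = xT                          # pin the end-point exactly
    return x

path = brownian_bridge(0.0, 2.0, rng=np.random.default_rng(3))
print(path[0], path[-1])
```

For a general SDE the analogous conditioning drift involves the (usually intractable) transition density, which is precisely why robust simulation of conditioned diffusions is a research problem of its own.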

Eigenvalues of the Laplace operator capture important geometric features of the underlying Riemannian manifold, or a graph, or indeed any space on which one fancies defining the operator.
Some other geometrically important invariants, such as the Cheeger constant, systoles, and Gromov's/Guth's widths, appear as "eigenvalues" of some non-linear operators, sometimes defined on non-linear functional spaces.
I will discuss some ways to give a rigorous definition of the spectrum of a non-linear operator, and natural questions (such as stability of the spectrum) that appear along the way.

Aleksey Tikhonov, a Senior Analyst from Yandex, will give a short retrospective of recent years' developments in NLP, typical difficulties and problem statements. We will start with classic RNN architectures and continue up to the newest approaches like BERT, ELMo, GPT-2. In the discussion part, we will talk about the latest trends and challenges. A basic understanding of neural network approaches is required.

Human well-being is affected by exposure to chemicals in the environment. Endocrine disrupting chemicals (EDCs) are one such group of emerging concern that has the ability to perturb the normal functioning of human beings. In order to screen for EDCs in our daily life, we have developed a detailed workflow which was employed to process ~16000 scientific abstracts and identify more than 650 EDCs with supporting experimental evidence. In this talk, I will present results from the network-based exploration of the chemical space of EDCs, which will highlight the challenges in separating EDCs from safe chemicals in our environment.
Ref: B.S. Karthikeyan et al, Science of the Total Environment 692, 281-296 (2019).

Having an upper or lower bound for the sectional curvature is equivalent to some metric properties which are geometrically meaningful even in geodesic length spaces. This led to the synthetic theory of curvature bounds in geodesic length spaces or more precisely the Alexandrov and Busemann definitions of curvature bounds. These extensions to metric geometry still require that any two points can be connected by a shortest geodesic.
In our current project, on which this talk is based, we introduce a notion of curvature inequalities that works with intersection patterns of distance balls and therefore is meaningful even for discrete metric spaces.
Such intersection patterns have already been investigated from different perspectives in the persistent homology method of topological data analysis, which (using Čech homology groups) records how such intersection patterns change when the radii of those distance balls increase.
Unlike previous developments (where the extreme space is the Euclidean plane), the extremes of our classification are tripod spaces, including hyperconvex spaces (which have trivial Čech homology groups). We also list some properties of hyperconvex spaces and some topological results on tripod spaces.

I will start with a short overview of the state-of-the-art methods for natural language generation (NLG). Then I want to discuss expert-based and ad-hoc metrics for different aspects of texts. Finally, I will discuss current experiments with controlled NLG, when the algorithm tries to change one aspect of the text and leave others intact.

Recent evolutionary bet-hedging models under fluctuating environments and environmental cues have adhered more or less to the classic growth-optimal strategy framework. In this talk, we outline a proposal for extensive departures from the standard framework to better account for evolutionary trajectories and fitness maximization under stochastic growth. Crucially, we incorporate considerations of volatility, motivated by accounting for interim extinction risk in finite populations, in conjunction with a shifting perspective from hypothetically infinite to finite-time evolutionary horizons.

Reinforcement learning in multi-agent systems has been studied in the fields of economic game theory, artificial intelligence and statistical physics by developing an analytical understanding of the learning dynamics (often in relation to the replicator dynamics of evolutionary game theory). However, the majority of these analytical studies focuses on repeated normal form games, which only have a single environmental state. Environmental dynamics, i.e. changes in the state of an environment affecting the agents' payoffs, have received less attention, and a universal method to obtain deterministic equations from established multi-state reinforcement learning algorithms has been lacking.
In this work we present a novel methodological extension, separating the interaction from the adaptation time scale, to derive the deterministic limit of a general class of reinforcement learning algorithms, called temporal difference learning. This form of learning is equipped to function in more realistic multi-state environments by using the estimated value of future environmental states to adapt the agent's behavior. We demonstrate the potential of our method with the three well-established learning algorithms Q-learning, SARSA learning and Actor-Critic learning. Illustrations of their dynamics on two multi-agent, multi-state environments reveal a wide range of different dynamical regimes, such as convergence to fixed points, limit cycles and even deterministic chaos.
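For reference, the stochastic (pre-limit) form of one of these algorithms, tabular Q-learning, on a hypothetical two-state environment; all parameter values and the toy environment are illustrative assumptions, not taken from this work:

```python
import numpy as np

# A two-state MDP: in state 0, action 0 stays (small reward 0.1),
# action 1 moves to state 1 (reward 0); state 1 is absorbing with reward 1.
def step(s, a):
    if s == 0:
        return (0, 0.1) if a == 0 else (1, 0.0)
    return (1, 1.0)

def q_learning(episodes=2000, gamma=0.9, alpha=0.1, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))                 # Q[state, action]
    for _ in range(episodes):
        s = 0
        for _ in range(30):              # finite horizon per episode
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r = step(s, a)
            # temporal-difference update with bootstrapped future value
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = q_learning()
print(np.argmax(Q[0]))  # the agent learns to move to the rewarding state
```

The deterministic limit studied in the talk is obtained, roughly, by averaging such stochastic updates over many interactions per adaptation step.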

The theory of nonautonomous dynamical systems has undergone major development during the past 19 years since I talked about attractors of nonautonomous difference equations at ICDEA Poznan in 1998.
Two types of attractors consisting of invariant families of sets have been defined for nonautonomous difference equations, one using pullback convergence with information about the system in the past and the other using forward convergence with information about the system in the future. In both cases, the component sets are constructed using a pullback argument within a positively invariant family of sets. The forward attractor so constructed also uses information about the past, which is very restrictive and not essential for determining future behaviour.
The forward asymptotic behaviour can also be described through the omega-limit set of the system. This set is closely related to what Vishik called the uniform attractor, although it need not be invariant. It is shown to be asymptotically positively invariant and, provided a future uniformity condition holds, also asymptotically negatively invariant. Hence this omega-limit set provides useful information about the behaviour in current time during the approach to the future limit.
References:
[1] P. E. Kloeden, T. Lorenz, The construction of non-autonomous forward attractors, Proc. Amer. Math. Soc. 144 (2016), no. 1, 259-268.
[2] P. E. Kloeden, Meihua Yang, Forward attraction in nonautonomous difference equations, J. Difference Equ. Appl. 22 (2016), 513-525.
[3] P. E. Kloeden, Asymptotic invariance and the discretisation of nonautonomous forward attracting sets, J. Comput. Dynamics 3 (2016), 179-189.

According to the Buddhist doctrine all things have no essence, but only shape. In this talk I will discuss the shape of the entropic cone. The entropic cone is the closure of the set of values of entropies of n finitely-valued random variables and their joints. For n=0,1,2,3 the entropic cone is easy to evaluate. Starting with n=4 things become more complicated and interesting. I will discuss what is (un)known and present some new results.

Processing of natural stimuli in sensory systems has been traditionally studied within two theoretical frameworks: probabilistic inference and efficient coding. Probabilistic inference specifies optimal strategies for learning about relevant properties of the environment from local and ambiguous sensory signals. Efficient coding provides a normative approach to study encoding of natural stimuli in resource-constrained sensory systems. By emphasizing different aspects of information processing they provide complementary approaches to study sensory computations. In this work we attempt to bring them together by developing general principles that underlie the tradeoff between energetic cost of sensory coding and accuracy of perceptual inferences. We then derive adaptive encoding schemes that dynamically navigate this tradeoff. These optimal encodings tend to increase the fidelity of the neural representation following a change in the stimulus distribution, and reduce fidelity for stimuli that originate from a known distribution. We predict dynamical signatures of such encoding schemes and demonstrate how phenomena well known in neurobiology, such as burst coding and firing rate adaptation, can be understood as hallmarks of optimal coding for accurate inference.

In this talk, I introduce a space (topos) related to random experiments and an adapted cohomology theory. I describe the cohomological characterization of Shannon entropy, Tsallis entropy and the Fontené-Ward multinomial coefficients (a generalized version of the usual ones), as well as a functorial relation between them. Finally, I comment on a generalization of Shannon theory in which messages are finite vector spaces.

We provide a generalization of Ollivier curvature to weighted graphs with potentially unbounded Laplacian. An immediate consequence of this is a discrete version of the Laplacian comparison principle. We give a continuous time heat semigroup characterization of lower bounds on the Ollivier curvature via the perpetual cutoff method. Using this, we show that a lower bound on the Ollivier curvature implies stochastic completeness.

The spectral dimension is a positive real number related to the probability that a random walk on a network eventually returns to its starting point. It is often finite in geometric networks, e.g. the k-nearest-neighbour graph on uniformly random points on the torus, so the appearance of a finite spectral dimension in a growing network model is often considered to be a “geometric” property. The same holds for a non-trivial distribution of node “curvatures” (related to the incidence of triangles/simplicial complexes at a point; see e.g. the recent work of J. Jost, as well as M. Gromov and O. Knill), a non-trivial community structure, much higher clustering than that of random networks, and the famous six degrees of separation, i.e. the "small world" property. A further example introduced recently is the random topology of the network’s clique complex: we build a topological space by face-including (gluing together at edges/faces) the many-body interactions, e.g. triangles, 4-cliques, etc., in a network data set, and then compare its homology to those of random geometric complexes like the Vietoris-Rips or the Čech complex, introduced by e.g. Linial, Meshulam, Farber, Bianconi and Kahle. We introduce and discuss recent progress made on determining to what extent these properties emerge in models of complex networks.
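As a small illustration of the clique-complex construction, the 2-simplices of a graph are just its triangles; the sketch below enumerates them (the trace(A^3)/6 identity is a standard cross-check, not a method from this work):

```python
import numpy as np
from itertools import combinations

def triangles(A):
    """All 3-cliques (triangles) of a graph with adjacency matrix A;
    these are the 2-simplices glued in when building the clique complex."""
    n = len(A)
    return [t for t in combinations(range(n), 3)
            if A[t[0], t[1]] and A[t[0], t[2]] and A[t[1], t[2]]]

# complete graph K4: every one of the C(4,3) = 4 vertex triples is a triangle
K4 = np.ones((4, 4)) - np.eye(4)
print(len(triangles(K4)))  # 4

# algebraic cross-check: #triangles = trace(A^3) / 6
print(int(np.trace(np.linalg.matrix_power(K4, 3)) / 6))  # 4
```

Higher cliques are included analogously, and the homology of the resulting complex is then compared to that of random geometric complexes.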

Geometric singular perturbation theory (GSPT) founded by Fenichel has been successfully used in many areas of mathematical biology, e.g. mathematical neuroscience and calcium signaling. However, slow-fast analysis and GSPT of mathematical models arising in cell biology is much less established. The main reason seems to be that the corresponding models typically do not have an obvious slow-fast structure of the standard form. Nevertheless, many of these models exhibit some form of hidden slow-fast dynamics, which can be utilized in the analysis. In this talk I will explain some of the main concepts of GSPT in the context of a non-trivial application. I will present a geometric analysis of a novel type of relaxation oscillations involving two different switches in a model for the NF-$\kappa$B signaling pathway.

The analysis of a network usually focuses on its structural properties, with network statistics being derived from node and edge distributions alone. But networks also include a dynamic component, and in some fields, such as biochemistry, the most interesting properties of a network are associated with its dynamics. In this talk I will describe a theory developed for the analysis of biochemical networks, Metabolic Control Analysis (MCA), which attempts to quantitatively identify the "important" parameters of systems of enzyme-catalyzed reactions. I will attempt to generalize this theory and compare it with others being developed and used within the group to study networks.
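The flavour of MCA can be conveyed by a toy example (my own sketch with made-up linear kinetics, not a model from the talk): for a two-step pathway, the flux control coefficients $C_i = (k_i/J)\,\partial J/\partial k_i$ quantify how much each enzyme "controls" the steady-state flux $J$, and they obey the summation theorem $\sum_i C_i = 1$:

```python
# Toy two-step pathway X0 -> S -> P with linear kinetics
# v1 = k1*(X0 - S), v2 = k2*S (illustrative rates, not from the talk).

def steady_flux(k1, k2, X0=1.0):
    s = k1 * X0 / (k1 + k2)      # steady state: v1 = v2
    return k2 * s                # the steady-state flux J

def control_coefficient(flux, k, i, h=1e-6):
    # scaled sensitivity C_i = (k_i / J) * dJ/dk_i, via central differences
    up, dn = list(k), list(k)
    up[i] *= 1 + h
    dn[i] *= 1 - h
    dJ = (flux(*up) - flux(*dn)) / (2 * h * k[i])
    return k[i] * dJ / flux(*k)

k = (2.0, 3.0)
C1 = control_coefficient(steady_flux, k, 0)   # analytically k2/(k1+k2) = 0.6
C2 = control_coefficient(steady_flux, k, 1)   # analytically k1/(k1+k2) = 0.4
# summation theorem: C1 + C2 = 1
```

Here control is shared between both steps; neither enzyme is a single "rate-limiting step", which is precisely the kind of quantitative statement MCA makes precise.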

Classical results in differential geometry, such as the Lichnerowicz and Bonnet-Myers theorems or isoperimetric estimates, relate the Ricci curvature of a manifold to its analytic and topological properties. Originally, those estimates rely on sharp Ricci curvature lower bounds, and in recent years they have been generalized to integral curvature bounds. This talk will consider even more general Ricci curvature assumptions implying generalizations of the classical estimates. Namely, we show that, in a certain sense, relative boundedness of the Ricci curvature suffices to prove a Lichnerowicz and a Bonnet-Myers type theorem. If time allows, we will also discuss isoperimetric estimates based on Kato-type assumptions.

Spike-timing-dependent plasticity (STDP) is a biological mechanism which changes the strength of the connection between two neurons depending on the timing of the spikes in the pre- and postsynaptic neurons. Many studies relate STDP to the development of input selectivity and temporal coding, but time and energy efficiency is usually not studied. This work focuses on the ability of STDP to reduce latencies, which has only been briefly addressed (Song, Miller & Abbott, Nat. Neuroscience 2001). As a trivial example, suppose three presynaptic neurons consistently trigger a postsynaptic spike; by STDP, their strengths increase to the point where only two synapses are necessary. Since the first two inputs always arrive before the third, the postsynaptic spike is triggered earlier. We extend this notion to populations of neurons and account for inhibitory plasticity. Our work relates the system-level goal of speeding up computation to a mechanistic, neuron-level rule.
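The three-synapse example above can be sketched in a few lines (illustrative numbers; `spike_time` is a hypothetical helper, not code from the cited study). Before potentiation all three inputs are needed to reach threshold; after potentiation two suffice, so the postsynaptic spike fires one input-arrival earlier:

```python
def spike_time(times, weights, theta):
    """Time at which the running sum of inputs first reaches threshold theta."""
    total = 0.0
    for t, w in sorted(zip(times, weights)):
        total += w
        if total >= theta:
            return t
    return None  # no postsynaptic spike

times = [1.0, 2.0, 3.0]          # arrival times of the three inputs (ms)
theta = 1.0                      # firing threshold

before = spike_time(times, [0.34, 0.34, 0.34], theta)  # all three inputs needed
after = spike_time(times, [0.55, 0.55, 0.55], theta)   # two suffice after potentiation
# the postsynaptic spike moves from t = 3.0 ms to t = 2.0 ms
```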

Toposes are special kinds of categories. Because of a device called the "internal language", they can be regarded as alternate mathematical universes in which the usual laws of logic do not necessarily hold. Various subjects such as differential geometry, algebraic geometry, homotopy theory, commutative algebra, measure theory and computability theory provide sources for such toposes, which yield a way to view the familiar objects of study from a new point of view. The talk gives an introduction to this circle of ideas and surveys some of their applications, focusing on reflection principles and local-to-global principles in geometry and new reduction techniques in commutative algebra. No prior knowledge of category theory, topos theory or formal logic is presupposed.

Sherlock Holmes and Doctor Watson take a ride in a hot-air balloon. After a sudden gust of wind carries them in an unknown direction, they spot a man on the ground and inquire about their location. After a short moment of consideration, the man answers: "You are in a hot-air balloon." "This man is a mathematician!" -- concludes Sherlock, while the wind carries the balloon further. "But how do you know?" -- wonders Dr. Watson. "For three reasons, dear Doctor. First, he thought before answering; second, his answer is absolutely correct; and finally, his answer is absolutely useless."
While we, with J. Portegies, were developing the theory of tropical probability spaces, this very fitting description of mathematical work given by Sherlock was rather frustrating, because the value of any theory lies in its applications outside of itself.
Now that we have developed fairly sophisticated tools and are learning how to use them, the first fruits, though small and green, start appearing.
I will introduce the toolbox of tropical probability spaces and will show how it can be used to deduce a non-Shannon inequality for entropies of four random variables and their joints.

I will discuss the concept of affine representation for topological dynamical systems. This leads naturally to the study of dynamics induced onto the space of probability measures. Some qualitative relationships between a dynamical system and its lifting to the probability space are shown. I will also give a differentiable approach to dynamics on the space of probability measures. This structure implies that notions from differentiable dynamics may be carried over to the representation of a system that has no differentiable structure itself.

We will generalize the normalized combinatorial Laplace operator for graphs by defining two Laplace operators for hypergraphs that can be useful in the study of chemical reaction networks. We will also investigate some properties of their spectra.
Joint work with J. Jost.
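The hypergraph operators themselves are not specified in the abstract; as background, here is a minimal sketch of the graph operator being generalized, the normalized Laplacian $L = I - D^{-1/2} A D^{-1/2}$, shown on a 3-node path (a standard construction, not the talk's generalization):

```python
import numpy as np

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2} on a 3-node path.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

evals = np.sort(np.linalg.eigvalsh(L))
# the spectrum lies in [0, 2]; the multiplicity of the eigenvalue 0
# equals the number of connected components (here: 0, 1, 2)
```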

In this seminar, I am going to talk mostly about Ollivier (coarse) Ricci curvature in metric measure spaces, and specifically in graphs. Before coming to that point, I will describe some of the intuition behind this notion, which originates from Ricci curvature in Riemannian manifolds. Although, as one of the discrete generalizations of Ricci curvature, the Ollivier type retains fewer properties of Riemannian manifolds, it is comparatively simple to present and has a wide range of examples. This curvature, as an edge-based measure, is part of a relatively new approach to analyzing networks and can encode some important properties of a network.
This seminar is an introductory presentation of an ongoing project under the supervision of Professor Jost.
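As a hedged sketch of the graph version (with the common convention of uniform measures on neighbourhoods; `ollivier_curvature` is my own illustrative helper): the curvature of an edge $(x, y)$ is $\kappa(x, y) = 1 - W_1(\mu_x, \mu_y)/d(x, y)$, where $W_1$ is the Wasserstein (optimal transport) distance, computable as a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def ollivier_curvature(adj, x, y):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y) with uniform neighbour measures."""
    n = len(adj)
    # shortest-path distances via Floyd-Warshall
    d = np.where(adj, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])

    def mu(v):                      # uniform probability measure on neighbours
        m = adj[v].astype(float)
        return m / m.sum()

    mx, my = mu(x), mu(y)
    # W1 as a transport LP: minimise sum_ij T_ij * d_ij with marginals mx, my
    A_eq, b_eq = [], []
    for i in range(n):              # row sums of the transport plan equal mx
        row = np.zeros((n, n))
        row[i, :] = 1.0
        A_eq.append(row.ravel())
        b_eq.append(mx[i])
    for j in range(n):              # column sums equal my
        col = np.zeros((n, n))
        col[:, j] = 1.0
        A_eq.append(col.ravel())
        b_eq.append(my[j])
    res = linprog(d.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return 1.0 - res.fun / d[x, y]

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
kappa = ollivier_curvature(triangle, 0, 1)   # every edge of K3 has curvature 1/2
```

On the triangle, half of the mass of $\mu_x$ already sits on the common neighbour, so only half needs to be transported one step, giving $\kappa = 1/2$: triangles make edges positively curved.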

Networks and graphs are often studied using the eigenvalues of their adjacency matrices, a powerful mathematical tool with applications in fields as diverse as systems engineering, ecology, machine learning and neuroscience. Since in those applications the exact graph structure is not known, we usually resort to random graphs to obtain properties of eigenvalues from known structural features. However, this theory is not intuitive and only a few results are known. In this talk we tackle this problem by studying how cycles in a graph relate to its eigenvalues. We start by deriving a simple relation between eigenvalues and cycle weights, and use it to study two structural features: the spectral radius of circulant graphs and the eigenvalue distribution of random graphs with motifs. During this study we empirically uncover two surprising phenomena. First, circulant directed networks have eigenvalues distributed in concentric circles around the origin. Second, the eigenvalues of a network with an abundance of short cycles are confined to the interior of a k-ellipse, where k is the length of the cycles. Our approach offers an intuitive way to study the eigenvalues of graphs and in the process reveals surprising connections between random matrix theory and classical planar geometry.
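One elementary relation of this kind (a standard identity, not necessarily the one derived in the talk) is the moment formula $\sum_i \lambda_i^k = \operatorname{tr}(A^k)$, which counts closed walks of length $k$ and thus ties short cycles directly to the spectrum:

```python
import numpy as np

# tr(A^k) = sum_i lambda_i^k counts closed walks of length k,
# so short cycles leave a direct trace in the spectrum.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])        # the triangle K3, eigenvalues 2, -1, -1
lam = np.linalg.eigvals(A)

moment = np.sum(lam**3).real                     # spectral side: 8 - 1 - 1
walks = np.trace(np.linalg.matrix_power(A, 3))   # combinatorial side
# both equal 6: each of the 3 vertices starts 2 closed walks of length 3
```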

There have been many attempts at applying ideas and methods from information theory in the biosciences, ever since Shannon first laid out his theory of communication in engineering, continuing to the present day at an accelerating pace. Much of this has concerned modeling bio-communication, evolutionary processes under uncertain conditions, learning mechanisms in organisms, and new approaches to the analysis of high-throughput genetic data. This talk will be an informal review of both the very early reactions by biologists and some of the more recent interesting, and sometimes controversial, claims and results.

Dendrite morphology is known to play an important role in neural computation, and cells of the same class typically express characteristic properties in their dendritic structures, despite the fact that no two dendrites are exactly the same. In an attempt to capture those properties, a considerable number of different statistics for dendritic tree structures have been proposed over the last decades. Yet their statistical power, e.g. for clustering neurons into well-known cell classes, has remained limited. Using a large data set of reconstructed dendritic trees from different species and brain regions, we give an overview of the commonly considered statistics and show that many of them are highly correlated, explaining their weak power in classification tasks. We furthermore devise simple maximum entropy null-models, based on optimization principles already postulated by Ramón y Cajal, that are able to explain in most cases both the observed distributions and the pairwise correlations of common branching statistics. We conclude by presenting a number of novel statistics that characterize tree structure via centripetal branch orderings (Strahler numbers) and discuss what these observations could mean for the configuration space occupied by dendritic trees.
This is joint work with Hermann Cuntz.

The issue of the volume of sets of states in the phase space framework can be addressed in order to distinguish classical from quantum states, as well as to find the separable states within all quantum states. In finite-dimensional systems, several metrics are introduced in order to compute the volume of physical states. However, when going to infinite-dimensional systems, problems arise also from the non-compactness of the support of states. Thus, on the one hand, we have the difficulties in analysing infinite-dimensional systems, while on the other hand we still lack a unifying approach for evaluating volumes of classical and quantum states. To deal with these problems, we propose to exploit information geometry. In so doing, we associate a Riemannian manifold to a generic Gaussian system. Then, we consider a volume measure as the volume of the manifold associated with a set of states of the system. We are able to overcome the difficulty of an unbounded volume by introducing a regularizing function stemming from energy bounds, which acts as a form of compactification of the support of Gaussian states. We then proceed to consider a different regularizing function which satisfies some nice properties of canonical invariance. Finally, we find the volumes of classical, quantum, and quantum entangled states for two-mode Gaussian systems, showing chains of strict inclusions. This approach is extended to two-qubit systems by resorting to the Husimi $Q$-function, which is a true probability distribution function. Above all, we address the question of whether such an approach gives results similar to other approaches based on quantum versions of the Fisher metric, such as the Helstrom quantum Fisher metric and the Wigner-Yanase-like quantum Fisher metric. We focus on states having maximally disordered subsystems and analyze the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that all the above-mentioned approaches give the same qualitative results.

The habit does not make the monk... the algebraic dress of quantum mechanics hides a beautiful geometrical lingerie that I will try to uncover during the talk. In this context, I will briefly outline how we may think of the space of quantum states S as being a non-commutative version of classical probability theory, that is, how to look at quantum states as non-commutative versions of probability distributions. Then, we will see how the complex general linear group GL(n, C) and the unitary group U(n) act on S, partitioning it into the disjoint union of orbits, and we will discover the beautiful and highly rich geometry of the manifolds of isospectral quantum states - the orbits of U(n) - using it as a point of departure in order to look for geometrical structures on the manifold of invertible quantum states - the orbit of GL(n, C) - which is the primary object of quantum information theory. These geometrical structures will be families of quantum metric tensors satisfying the monotonicity property, and we will see how it is possible to extract these families from families of quantum relative entropies satisfying the data processing inequality. The explicit example of the (huge) two-parameter family of quantum relative entropies known as the α-z-Rényi relative entropies will be fully worked out. A covariant, coordinate-free, geometrical formalism will be the background spacetime in which we will move.

Given a set of predictor variables and a response variable, how much information do the predictors have about the response, and how is this information distributed between unique, complementary, and shared components? Recent work has proposed to quantify the unique component of the decomposition as the minimum value of the conditional mutual information over a constrained set of information channels. We present an efficient iterative divergence minimization algorithm to solve this optimization problem with convergence guarantees, and we evaluate its performance against other techniques.
Joint work with Johannes Rauh and Guido Montúfar (https://arxiv.org/abs/1709.07487)

I will sketch how analogous structures involving resource efficiency come up in various contexts, including chemistry, information theory, thermodynamics, and the mixing of paint. The general mathematical theory behind this is the theory of ordered commutative monoids. Among the main tools provided by this theory is a characterization of asymptotic efficiency in terms of monotone functionals, with a potential strengthening to monotone semiring homomorphisms via a suitable Positivstellensatz from real algebraic geometry. I will explain inner-mathematical applications to asymptotic aspects of graph theory, representation theory, and majorization theory.

The chemical community devotes most of its efforts to synthetic chemistry; knowledge about reactants, catalysts, solvents and several other related aspects of chemical reactions is therefore of great relevance. Part of this knowledge is its history, which involves determining the aspects that have shaped chemical reactions into their current state; these are tasks for the history of chemistry. However, analysing the chemical reactions reported in the scientific literature is no longer a subject for the conventional history of chemistry, as the number of substances and reactions grows exponentially. Here we show that a computational approach to the history of chemical reactions sheds light on the patterns behind the development and use of substances and reaction conditions. We explored the more than 45 million reactions gathered in the Reaxys database by modelling them as a network through a bipartite hypergraph model of a chemical reaction. We uncovered historical patterns for substances, types of substances, catalysts, solvents, temperatures and pressures of those reactions. We found that chemists have traditionally used few reactants to produce many different substances, and that most of the combinations explored in such syntheses involve about four chemical elements. Despite the exponential growth of substances and reactions, little variation of catalysts, solvents, and reactants is observed over time. Regarding reaction conditions, the vast majority of reactions fall into a narrow domain of temperature and pressure, namely normal conditions. We also found and quantified the effect of the world wars (WWs) on chemical novelty: WWI set the production of new substances and reactions back by around 30 years, and WWII by around 15. We anticipate this study, and especially its methodological approach, to be the starting point for a computational history of chemical reactivity in which social and economic contexts are integrated.
Joint work with Eugenio J. Llanos, Wilmer Leal, Guillermo Restrepo, and Peter Stadler.
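The bipartite model underlying the analysis can be sketched as follows (toy records with hypothetical substances, nothing drawn from Reaxys): substances and reactions form the two node classes, with edges labelled by the role a substance plays in a reaction:

```python
from collections import defaultdict

# Toy reaction records (hypothetical, not drawn from Reaxys):
# each reaction is a (reactants, products) pair of substance sets.
reactions = [
    ({"H2", "O2"}, {"H2O"}),
    ({"H2", "Cl2"}, {"HCl"}),
    ({"H2O", "SO3"}, {"H2SO4"}),
]

# Bipartite network: substance nodes and reaction nodes, with edges
# labelled by the role the substance plays in the reaction.
edges = []
for r_id, (reactants, products) in enumerate(reactions):
    for s in reactants:
        edges.append((s, r_id, "reactant"))
    for s in products:
        edges.append((s, r_id, "product"))

# "Few reactants produce many substances": count how often each
# substance is reused as a reactant across the data set.
reactant_use = defaultdict(int)
for s, r_id, role in edges:
    if role == "reactant":
        reactant_use[s] += 1
```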

The realizability problem for tropical abelian differentials can be stated as follows: given a pair $(\Gamma, D)$ consisting of a stable tropical curve $\Gamma$ and a divisor $D$ in the canonical linear system on $\Gamma$, decide whether there is a smooth curve realizing $\Gamma$ together with a canonical divisor that specializes to $D$. We give a purely combinatorial condition answering this question. In this talk I am going to introduce the basic notions needed to understand this problem and outline a comprehensive solution based on recent work of Bainbridge-Chen-Gendron-Grushevsky-Möller on compactifications of strata of abelian differentials. Along the way, I will also develop a moduli-theoretic framework to understand the specialization of divisors to tropical curves as a natural tropicalization map in the sense of Abramovich-Caporaso-Payne. This talk is based on joint work with Bo Lin, as well as on an ongoing project with Martin Möller and Annette Werner.

One way to describe a complex projective variety X is to give equations cutting out X from an ambient projective space. At times, however, there is a more intrinsic way of describing the complex structure of X if we know the underlying topological manifold. The additional data are called the periods of X and may be viewed as integrals of holomorphic forms on X over cycles in X.
In principle, the equations of X determine the periods of X. In practice, it is hard to compute the periods given an equation, and vice versa. We will talk about how one can determine the periods of X by first computing the periods of a more favorable X' and deforming X' to X, keeping track of the change in periods via the so-called Picard-Fuchs equations.

Knots appear frequently in linear polymers. The problem of their presence in DNA is resolved by specific enzymes that cut the DNA chain to restore a functional topology. However, in general, for synthetic ring polymers the topology is fixed. We study the effects and the universal statistics of configurations with fixed knots, using simple lattice models. It turns out that such statistics depend sensitively on the phase: knots are relatively rare and localized in swollen polymers, while they are more frequent and delocalized in collapsed globules. These results are linked with an interesting thermodynamic behaviour: in the competition for length between the two loops into which a collapsed ring is divided by means of a slip link, the knots enclosed in each loop determine a sort of "topological tension" that pulls the chain toward the side of the more complex knot.