Many projective varieties are ideal-theoretically cut out by quadratic polynomials of rank less than or equal to 4. Classical constructions in projective geometry like rational normal scrolls and Segre-Veronese varieties are examples. Regarding this phenomenon, I would like to present the following two results in this talk. First, there are many projective varieties cut out ideal-theoretically by quadratic polynomials of rank 3. Second, the locus of rank-3 quadratic equations of a projective variety has a nice structure.
The Pythagoras number of a projective variety is defined as the minimum number of squares required to represent (nonnegative) quadratic forms on the real homogeneous coordinate ring of the variety as a sum of squares. It was introduced by Blekherman, Sinn, Smith, and Velasco in 2022. To investigate this semi-algebraic quantity, they introduced and analyzed various algebraic invariants, including quadratic persistence, that establish bounds on the Pythagoras number. In particular, they found intriguing equivalences between algebraic and semi-algebraic invariants for some classes of varieties, such as varieties of minimal degree or arithmetically Cohen-Macaulay varieties of almost minimal degree, known as del Pezzo varieties.
We investigate quadratic persistence and Pythagoras numbers in the next class of varieties after those listed above, including some non-arithmetically Cohen-Macaulay varieties and those where the quadratic persistence equals the codimension minus two. We mainly focus on curves C such that deg(C) = codim(C) + 3 in this talk, but we will discuss further variety classes if time permits. This talk is based on joint work with Euisung Park and Jong In Han.
Positivstellensatz is an important result in real algebraic geometry which guarantees that a polynomial strictly positive on a compact semialgebraic set has a representation as an "obviously nonnegative" polynomial via a sum of squares certificate. This certificate is computationally tractable via semidefinite programming (SDP), but the size of the SDP is dependent on the certificate's degree. It is important to understand how the degree of the certificate depends on the minimum of the positive polynomial on the semialgebraic set. There has been a recent flurry of activity providing upper bounds on the degree, but comparatively little is known about lower bounds. I will present some ideas on how to use tropical techniques to prove lower bounds. This is work in progress, and I hope the audience will help it progress even further.
This is a preliminary lecture that aims to give a broad overview of some of the concepts and tools that analytic number theorists and arithmetic geometers have at their disposal, viz. modular forms and their generalizations, and L-functions attached to various objects. Their mathematical importance is difficult to overstate. However, these tools, techniques and concepts also have applications to a host of problems in mathematical physics (not necessarily string theory). One such application is in the computation of certain Feynman integrals and scattering amplitudes.
The goal of this lecture is threefold. The primary focus will be a very rudimentary and (hopefully) computational introduction to modular forms, L-functions of elliptic curves and rational point counts; the ulterior motive (pun intended) is to build towards a lecture series and/or seminar series, for which some have already expressed interest, with more details to be discussed. A second goal is to introduce a few more dangerous words and related words to Bernd's list. A third is to build towards understanding (in an arithmetic setting) some of the dangerous words in Bernd's list.
The postings for next week's workshop point to many interesting concepts, like graph polynomial, cluster algebra, computer algebra, cyclic polytope, GKZ system, Calabi-Yau manifold, and cosmological correlator. I will illustrate these seven topics by way of examples, and I will encourage further discussions.
The classical Chow form encodes any projective variety by one equation. We here introduce the Chow-Lam form for subvarieties of a Grassmannian. By evaluating the Chow-Lam form at twistor coordinates, we obtain universal projection formulas. These were pioneered by Thomas Lam for positroid varieties in the study of amplituhedra, and we develop his approach further. Universal formulas for branch loci are obtained from Hurwitz-Lam forms. Our focus is on computations and applications in geometry.
A compact group $A$ is called an amalgamation basis if, for every way of embedding $A$ into compact groups $B$ and $C$, there exist a compact group $D$ and embeddings $B\to D$ and $C\to D$ that agree on the image of $A$. Bergman in a 1987 paper studied the question of which groups can be amalgamation bases. A fundamental question that is still open is whether the circle group $S^1$ is an amalgamation basis in the category of compact Lie groups. Further reduction shows that it suffices to take $B$ and $C$ to be the special unitary groups. In our work, we focus on the case when $B$ and $C$ are the special unitary group in dimension three. We reformulate the amalgamation question into an algebraic question of constructing specific Schur-positive symmetric polynomials and use integer linear programming to compute the amalgamation. We conjecture that $S^1$ is an amalgamation basis based on our data. This is joint work with Michael Joswig, Mario Kummer, and Andreas Thom.
The existence of Kähler-Einstein (KE) metrics on Fano manifolds is a long-standing problem in algebraic geometry. Since the solution of the Yau-Tian-Donaldson conjecture, the existence of a KE metric on a Fano manifold $X$ has been proven to be equivalent to the K-polystability of $X$. In particular, the problem of the existence of a KE metric has been translated into algebro-geometric terms. In my talk, I will explain how the Abban-Zhuang theory can be used to prove K-stability of Fano varieties, and apply it to prove the K-stability of certain Fano 3-folds obtained as blow-ups of $\mathbb P^3$ along a curve. Everything is based on joint work with Tiago Duarte Guerreiro and Nivedita Viswanathan.
Although the Hilbert schemes of points are the simplest examples of Hilbert schemes, there are many open problems concerning them. In this seminar I will state some of them, explain some recent progress towards their solution, and describe some possible strategies to attack them. I will mainly focus on the case of smooth threefolds, since they are the main objects of my research.
- Abishek Deshpande: TBA
- Yassine El-Maazouz: Orthogonal p-adic measures on Schur modules
- Oskar Henriksson: Counting steady states with tropical and toric methods
- Birte Ostermann: Exploring speedups for traveling salesman problem
- Kemal Rose: Tropical implicitization
- Marina Garrote Lopez: A discovery algorithm for cyclic causal graphs under non-Gaussian linear models
- Máté László Telek: Reaction networks and signed faces of the Newton polytope
- Nidhi Kaihnsa: Coexistence of multistationarity and ACR
- Svala Sverrisdottir: A polynomial system from quantum chemistry
- Zaïneb Bel-Afia: Chebyshev curves
- Eric Boniface: Tropical maximum likelihood estimation
- Beatriz Pascual Escudero: The geometry of steady state varieties
We are interested in a combinatorial question related to the ongoing interest in understanding the complexity of integer linear programming with bounded subdeterminants.
Given a positive integer Delta and a full-rank integer matrix A with m rows such that the absolute value of every m-by-m minor of A is bounded by Delta, at most how many pairwise distinct columns can A have? The case Delta = 1 is the classical result of Heller (1957) saying that the maximal number of pairwise distinct columns of a totally unimodular integer matrix with m rows is m^2 + m + 1. If m >= 3 and Delta >= 3, the precise answer to this combinatorial question is not known, but bounds of order O(Delta^2 m^2) have been proven by Lee et al. (2021). In the talk, I will discuss our approach to the problem which rests on counting columns of A by residue classes of a suitably defined integer lattice. As a result, we obtain the first bound that is linear in Delta and polynomial (of low degree) in the dimension m.
This is joint work with Gennadiy Averkov.
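As a tiny hedged illustration of the Delta = 1 case quoted above (of Heller's classical bound, not of the new counting argument), the following brute-force sketch verifies that for m = 2 a totally unimodular matrix has at most m^2 + m + 1 = 7 pairwise distinct columns.

```python
from itertools import combinations, product

# Brute-force check of Heller's bound for m = 2, Delta = 1: a totally
# unimodular matrix with 2 rows has at most 2^2 + 2 + 1 = 7 distinct columns.
def is_tu_2rows(cols):
    # entries already lie in {-1, 0, 1}; it remains to check every 2x2 minor
    return all(abs(a * d - b * c) <= 1 for (a, b), (c, d) in combinations(cols, 2))

candidates = list(product([-1, 0, 1], repeat=2))   # the 9 possible columns
best = max(
    (S for r in range(1, len(candidates) + 1) for S in combinations(candidates, r)
     if is_tu_2rows(S)),
    key=len,
)
print(len(best), best)   # 7, matching m^2 + m + 1 for m = 2
```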
Fullerenes are polyhedral molecules made of carbon atoms, first discovered in 1985. There is a simple mathematical model of them, as a convex polyhedron all of whose faces are hexagons and pentagons. During their wave of popularity in the 1990s, mathematical chemists developed several (slow) algorithms to enumerate them, and asked how many isomers exist with a given number of carbon atoms. In joint work with Phil Engel, we show that these numbers are the Fourier coefficients of an Eisenstein series, and find an explicit formula. In this talk I will try to tell some of the chemistry story, as well as explaining the math.
Over the last couple of decades, the theory of interacting particle systems has found some unexpected connections to orthogonal polynomials, symmetric functions, and various combinatorial structures. The asymmetric simple exclusion process (ASEP) has played a central role in this study: recently, it was discovered that its probabilities can be written as specializations of Macdonald polynomials, which are an important family of symmetric functions with parameters q and t that generalize the Schur functions. In this talk, we describe a combinatorial model called a "multiline queue" that illuminates this remarkable connection. On one hand, multiline queues give a new formula for symmetric Macdonald polynomials, and on the other hand they encode the dynamics of the ASEP and give formulas for its stationary distribution. As an application, we consider a specialization of multiline queues as a natural setting to recover some classical results in symmetric function theory.
Mustafin varieties are degenerations of projective spaces induced by finite subsets of a Bruhat-Tits building. They are related to configuration spaces, matroid subdivisions, tropical compactifications, and even to multiview problems in computer vision. I will report on my 2011 article on Mustafin varieties, written jointly with Dustin Cartwright, Matthias Haebich and Annette Werner, and I will point to more recent work on this topic.
These days, colliders are generating experimental data with great precision. To compare with the experiments, one needs more precise theoretical calculations in perturbative QCD. In this talk, I will introduce some traditional methods for evaluating Feynman integrals and briefly present our recent paper on 2-loop 6-point integral calculations. In this paper, we found the UT basis of 2-loop 6-point integrals and explored the possible functions of the 6-point scattering process.
We discuss a system of 16 quadratic equations in 24 variables that arises in the study of Lie algebras. The solutions are the Lie algebra structures on a 4-dimensional vector space. There are four irreducible components of dimension 11. We compute their degrees and Hilbert polynomials, and thereby answer a 1999 question by Kirillov and Neretin. This is joint work with Laurent Manivel and Svala Sverrisdottir.
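For readers who want to see the system concretely, here is a hedged sketch (using sympy, with structure constants c_{ij}^k as the 24 coordinates) of how the Jacobi identity produces 16 quadratic polynomials; the indexing conventions are mine, not necessarily those of the paper.

```python
from itertools import combinations
import sympy as sp

n = 4
# structure constants for [e_i, e_j] = sum_k c_{ij}^k e_k, antisymmetric in (i, j):
# 6 pairs i < j times 4 components = 24 variables
c = {(i, j, k): sp.Symbol(f"c_{i}{j}{k}")
     for i, j in combinations(range(n), 2) for k in range(n)}

def C(i, j, k):
    if i == j:
        return sp.Integer(0)
    return c[(i, j, k)] if i < j else -c[(j, i, k)]

# Jacobi identity [[e_i,e_j],e_k] + [[e_j,e_k],e_i] + [[e_k,e_i],e_j] = 0 gives,
# for each of the 4 triples i < j < k and each basis index l, one quadratic equation.
equations = [
    sp.expand(sum(C(i, j, m) * C(m, k, l) + C(j, k, m) * C(m, i, l) + C(k, i, m) * C(m, j, l)
                  for m in range(n)))
    for i, j, k in combinations(range(n), 3) for l in range(n)
]
print(len(equations))   # 16 quadratic polynomials in the 24 structure constants
```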
It is commonly accepted that many biological networks, such as gene regulatory or metabolic networks, are “modular,” in the sense that networks exhibit a structure of regions in which nodes are highly connected, called modules, with sparse connections between modules. Versions of such “community structures” can be found throughout the complex systems field. Since networks in molecular biology are often dynamic by nature, the natural question arises whether and how structural modularity is reflected in network dynamics. And if it is, then this modular structure should be recognizable in the dynamic models we frequently use to describe them. One common type of dynamic model in systems biology is that of Boolean networks, characterizing dynamics in terms of logical rules of regulation. This talk will propose, for the Boolean network framework, a definition of module, present a decomposition theorem that characterizes the modular structure of Boolean networks, and show that this decomposition theorem induces a decomposition of the resulting network dynamics in terms of the dynamics of the structural modules. The talk will also discuss some related open mathematical problems. The preprint https://arxiv.org/abs/2206.04217 contains details.
Paul Breiding was awarded the Emmy Noether Research Group Grant in 2020. In this informal meeting he will share with us his experience with the grant writing procedure. Then there will be time for questions and discussions.
Point correspondences of pinhole cameras are a fundamental object of study in algebraic vision, which is the study of algebraic geometry for reconstruction problems in computer vision. These correspondences are well-understood by now. More recently, line correspondences were characterized mathematically. In this talk, we generalize this study to correspondences of arbitrary k-planes in P^n. The geometry of this problem is interesting by itself, but also has potential applications in modeling more difficult reconstruction problems, and for understanding certain determinantal varieties.
Autoregulatory processes are present in several biochemical systems and their main characteristic is the capacity to mitigate the effect of external input on the system. When these processes are modeled with Chemical Reaction Networks, the robustness with respect to external input is called Absolute Concentration Robustness (ACR), and under mass action kinetics, detecting ACR corresponds to deciding whether the real "positive" part of an algebraic variety is contained in a hyperplane parallel to a coordinate hyperplane. In this talk, I will introduce the concept of ACR, give an overview of available tools for detecting ACR, and then focus on detecting ACR in small networks. For networks with low-dimensional stoichiometric space, we provide a full classification of the networks that have ACR for every set of parameters. This is joint work with Anne Shiu and Nicolette Meshkat.
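To make the geometric reading of ACR concrete, here is a hedged toy computation with the standard two-species Shinar-Feinberg network A + B -> 2B, B -> A; this small example is my own illustration, not necessarily one from the talk.

```python
import sympy as sp

a, b, k1, k2 = sp.symbols('a b k1 k2', positive=True)
# mass-action ODEs:  da/dt = -k1*a*b + k2*b,   db/dt = k1*a*b - k2*b
steady_state = sp.Eq(k1 * a * b - k2 * b, 0)
print(sp.solve(steady_state, a))   # [k2/k1]
# Every positive steady state satisfies a = k2/k1, i.e. the positive part of the
# steady-state variety lies in a hyperplane parallel to the coordinate hyperplane
# {a = 0}; the network has ACR in species A for every choice of k1, k2.
```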
Given a procedure that associates to a combinatorial object x a geometric object X, one can ask how complicated the geometric objects obtainable by this construction can be. A *universality theorem* is a precise formulation of "as complicated as it could possibly be". That is, the combinatorial objects x cause the spaces X to exhibit all (bad) geometric features, in a certain sense.

Theorems of this form abound in non-linear algebra. They hold for realization spaces of matroids and polytopes in discrete geometry, for totally mixed Nash equilibria in economics, for conditional independence models in statistics. Or, as Ravi Vakil once put it: Murphy's law holds in algebraic geometry [https://arxiv.org/abs/math/0411469].

In this talk I want to present the surprisingly simple geometric idea behind all of these proofs and, time permitting, discuss a conjectured universality theorem that I have so far failed to prove.
An n-person game is specified by n tensors of the same format. Its equilibria are points in that tensor space. Dependency equilibria satisfy linear constraints on conditional probabilities. These cut out the Spohn variety, named after the philosopher who introduced the concept. Nash equilibria are tensors of rank one. We discuss the real algebraic geometry of the Spohn variety and its payoff map, with emphasis on connections to oriented matroids and algebraic statistics.
In this mini event Claudia Fevola, Christiane Görgen, Marie Brandenburg and Chiara Meroni will give a hands-on introduction to our institute's online repository mathrepo (mathrepo.mis.mpg.de). We show you the homepage and its functionalities and we will take you through the steps of uploading your own material. If you have ever found yourself wondering how to handle that extra example, algorithm or piece of code which didn't quite make it into the print version of your article but would make your research so much more valuable for future researchers, this is for you! Stop uploading stuff on your personal webpages only but grab your laptops, come along, try this out and ask all the questions on your mind.
In this seminar, Yassine El Maazouz, Claudia Fevola, Leonid Monin and Rosa Winter discuss the questions "What can Oscar do for me, and what can I do for Oscar?" and "How am I using MathRepo, and what could be improved?".
The main goal is to determine the (co)homology groups of complex surfaces in projective three-space. We will also discuss the use of low-dimensional caricatures to think effectively about complex curves and surfaces.
The success of CNNs in image recognition is attributed to weight sharing and their structural compatibility with image data (approximate translation invariance, receptive fields, ...). Aiming for a structure that is relevant to sequential data (and shares weights), we lay the mathematical groundwork by extending the notion of quasisymmetric functions to semirings. All objects will be introduced and the talk will be self-contained. I briefly outline how this can fit into a deep learning pipeline.
This is joint work with Kurusch Ebrahimi-Fard (NTNU Trondheim) and Nikolas Tapia (WIAS Berlin).
In this talk, I will consider two metrics for (different types of) trees: the Gromov-Hausdorff distance for metric trees and the interleaving distance for so-called merge trees (used in applications, e.g., graphics and visualization). The Gromov-Hausdorff distance is a natural way to measure the distortion between two metric spaces. However, to date, the algorithmic development for computing and approximating the Gromov-Hausdorff distance has been limited. In this talk I will focus on computing the Gromov-Hausdorff distance for metric trees. Unfortunately, we show that it is NP-hard to approximate it within a factor of 3, even in the case where the underlying trees have unit edge length. On the other hand, interestingly, it turns out that this distance relates to the so-called interleaving distance for merge trees stemming from the field of computational and applied topology. I will describe this connection, and show how to derive a fixed-parameter-tractable (FPT) algorithm for computing the interleaving distance, which in turn gives rise to an approximation algorithm for the Gromov-Hausdorff distance for general metric trees.
In this talk, we introduce the tropical hyperfield and ordered blueprints. We motivate how to use this formalism to formulate tropical scheme theory, and point out its advantages over other approaches.
The talk will present a simple way to construct a universal Groebner basis of a polynomial ideal, which serves as a Groebner basis for all admissible term orderings. A byproduct of the approach is a new way to convert a Groebner basis for one ordering into a Groebner basis for another ordering, irrespective of the dimensionality of the associated ideal. This construction of a universal Groebner basis is integrated with parametric (more popularly called comprehensive) Groebner bases, which serve as Groebner bases for all possible specializations of parameters. Two related approaches will be presented. The first one extends Kapur's algorithm for computing a parametric Groebner basis: along with branching based on whether the head coefficient is nonzero or not, branching on ordering constraints is also performed in order to first choose a term that could serve as the head term. The second one is based on Kapur, Sun and Wang's algorithm for computing a comprehensive Groebner basis and system, but uses a reduced universal Groebner basis to generate a universal parametric Groebner basis. The result of these algorithms is a mega Groebner basis that works for every admissible ordering as well as for any specialization of parameters.
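As a very partial illustration of the defining property (and only that: sympy exposes just a handful of term orders, and this is not the construction of the talk), one can compare the reduced Groebner bases a computer algebra system produces for the orderings it supports.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = [x**2 + y * z - 1, x * y - z]
# A universal Groebner basis must remain a Groebner basis for *every* admissible
# term ordering; here we merely inspect the reduced bases for three orderings.
for order in ('lex', 'grlex', 'grevlex'):
    G = sp.groebner(F, x, y, z, order=order)
    print(order, G.exprs)
```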
Consider a situation in which a set of n random variables X1...Xn carry information about some random variable of interest Y. For example, Y could represent an external stimulus while X1...Xn could represent the activity of n brain regions; alternatively, Y could be the output of a digital logic gate while X1...Xn could be the n inputs of the gate. Recent work in information theory has considered the problem of quantifying the amount of redundancy among X1...Xn about Y, i.e., the amount of information about Y that is found individually within all X1...Xn. Several approaches to quantifying redundancy have been proposed, but most lack a clear operational motivation, or only apply in the case of two X variables. Here we propose a novel measure of redundancy which applies to an arbitrary number of variables. We show that our measure has decision-theoretic, set-algebraic, geometric, and axiomatic interpretations. Our approach also sheds new light on the difficulties encountered by existing ways of quantifying redundant and synergistic information.
History and Philosophy of Science (HPS) defends ideas about the methods of science based on historical evidence. The proponents of the received HPS (for instance Popper, Kuhn, and Lakatos) supported their ideas by each selecting favorable historical episodes of science, which they reinterpreted according to their own views. Because of biased case selections and very liberal interpretations, they lost historical evidence. In contrast, evidence-based HPS transforms the ideas of speculative HPS into operational categories and draws random samples from the history of science for statistical analysis, largely applying empirical methods of the social sciences. I illustrate the approach by an analysis of synthetic chemistry, a field that has remained incomprehensible to and neglected by the received HPS, despite its all-dominating size.
References:
Schummer, J.: “Scientometric Studies on Chemistry I: The Exponential Growth of Chemical Substances, 1800-1995”, Scientometrics, 39 (1997), 107-123.
Schummer, J.: “Scientometric Studies on Chemistry II: Aims and Methods of Producing new Chemical Substances”, Scientometrics, 39 (1997), 125-140.
Schummer, J.: “Why do Chemists Perform Experiments?”, in: D. Sobczynska, P. Zeidler & E. Zielonacka-Lis (eds.), Chemistry in the Philosophical Melting Pot, Frankfurt: Peter Lang, 2004, pp. 395-410.
Neural networks can be trained to perform functions, such as classifying images. The usual description of this process involves keywords like neural architecture, activation function, cost function, back propagation, training data, weights and biases, and weight-tying.
In this talk we will define a symmetric monoidal category Learn, in which objects are sets and morphisms are roughly "functions that adapt to training data". The back propagation algorithm can then be viewed as a strong monoidal functor from a category of parameterized functions between Euclidean spaces to our category Learn.
This presentation is algebraic, not algorithmic; in particular it does not give immediate insight into improving the speed or accuracy of neural networks. The point of the talk is simply to articulate the various structures that one observes in this subject—including all the keywords mentioned above—and thereby get a categorical foothold for further study. For example, by articulating the structure in this way, we find functorial connections to the well-established category of lenses in database theory and the much more recent category of compositional economic games.
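As a hedged sketch of what such a morphism can look like in code (a quadruple of parameter, implement, update and request maps, with composition threading requests backwards), note that the exact conventions below are my own reconstruction for illustration, not the talk's definition.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Learner:
    params: Any
    implement: Callable   # (params, a) -> b
    update: Callable      # (params, a, b_target) -> new params
    request: Callable     # (params, a, b_target) -> a_target, the "back-propagated" input

def compose(g: Learner, f: Learner) -> Learner:
    """Composite learner g o f: parameters are paired, requests flow backwards."""
    def implement(pq, a):
        p, q = pq
        return g.implement(q, f.implement(p, a))
    def update(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        return (f.update(p, a, g.request(q, b, c)), g.update(q, b, c))
    def request(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        return f.request(p, a, g.request(q, b, c))
    return Learner((f.params, g.params), implement, update, request)
```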
Complex networks are popular means for studying a wide variety of systems across the social and natural sciences. Recent technological advances allow for a description of these systems on an unprecedented scale. However, due to the immense size and complexity of the resulting networks, efficient evaluation remains a data-analytic challenge. In a recent series of articles, we developed geometric tools for efficiently analyzing the structure and evolution of complex networks. The core component of our theory, a discrete Ricci curvature, translates central tools from differential geometry to the discrete realm. With these tools, we extend the commonly used node-based approach to include edge-based information such as edge weights and directionality for a more comprehensive network characterization.

The analysis of a wide range of complex networks suggests connections between curvature and higher-order network structure. Our results identify important structural features, including long-range connections of high curvature acting as bridges between major network components. Thus, curvature identifies the network’s core structure, on which expensive network hypothesis testing and further network analysis become more feasible. We will discuss an application of curvature-based methods to networks constructed from fMRI data. Joint work with J. Jost (MPI MIS) and E. Saucan (Technion).
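As a hedged, minimal illustration of an edge-based discrete curvature (here the combinatorial Forman-Ricci curvature of an unweighted graph without 2-cells, which reduces to 4 - deg(u) - deg(v); the weighted and directed versions discussed in the talk are richer):

```python
import networkx as nx

def forman_curvature(G):
    # Forman-Ricci curvature of an edge (u, v) in an unweighted, undirected graph
    # without higher-dimensional cells: 4 - deg(u) - deg(v)
    return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

G = nx.karate_club_graph()
curv = forman_curvature(G)
# edges with very negative curvature tend to act as bridges between communities
print(sorted(curv.items(), key=lambda kv: kv[1])[:5])
```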
In some approaches to the reconstruction of phylogenetic trees, Markov chain Monte Carlo methods are used. These in turn use simple base-chains on the set of (binary) trees of a given size $N$. It is at least of mathematical interest (but might also help to understand properties of such Markov chains when the trees are large) to consider limit processes as $N$ tends to infinity and the time is suitably sped up. Here, we have to decide in which state space we are working, i.e., what kind of objects we want to consider as "continuum trees" in the limit, and what we mean by "limit".

One by now almost-classical approach is to work in a space of metric measure spaces, but while it has proven successful in some situations, it seems difficult to prove convergence in others. Motivated by a particular Markov chain, the Aldous chain on cladograms, where existence of a limit process has been conjectured almost two decades ago, we introduce an alternative state space. We define the objects by a "tree structure" (formalized by a branch-point map) instead of a metric structure and call them algebraic measure trees. In this new state space, we are able to prove convergence of the Aldous chain to a limit process with continuous paths. (joint work with Anita Winter and Leonid Mytnik)
Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. In particular, PDEs are a fundamental tool in portfolio optimization problems and in the state-of-the-art pricing and hedging of financial derivatives. The PDEs appearing in such financial engineering applications are often high dimensional as the dimensionality of the PDE corresponds to the number of financial assets in the involved hedging portfolio. Such PDEs can typically not be solved explicitly and developing efficient numerical algorithms for high dimensional PDEs is one of the most challenging tasks in applied mathematics. As is well-known, the difficulty lies in the so-called "curse of dimensionality" in the sense that the computational effort of standard approximation algorithms grows exponentially in the dimension of the considered PDE and there is only a very limited number of cases where a practical PDE approximation algorithm with a computational effort which grows at most polynomially in the PDE dimension has been developed. In the case of linear parabolic PDEs the curse of dimensionality can be overcome by means of stochastic approximation algorithms and the Feynman-Kac formula. We first review some results for stochastic approximation algorithms for linear PDEs and, thereafter, we present a stochastic approximation algorithm for high dimensional nonlinear PDEs whose key ingredients are deep artificial neural networks, which are widely used in data science applications. Numerical results illustrate the efficiency and the accuracy of the proposed stochastic approximation algorithm in the cases of several high dimensional nonlinear PDEs from finance and physics.
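A hedged sketch of the linear case mentioned above: by the Feynman-Kac formula, the heat equation du/dt = Laplacian(u) with initial datum g satisfies u(t, x) = E[g(x + sqrt(2t) Z)], so a Monte Carlo estimator scales only linearly in the dimension per sample (the function g below is an arbitrary example of mine, not one from the talk).

```python
import numpy as np

d, t, n_samples = 100, 1.0, 10**5
x = np.zeros(d)
g = lambda y: np.exp(-np.sum(y**2, axis=-1) / (2 * d))   # example initial condition

Z = np.random.standard_normal((n_samples, d))
u_estimate = g(x + np.sqrt(2 * t) * Z).mean()             # u(t, x) = E[g(x + sqrt(2t) Z)]
print(u_estimate)   # error ~ 1/sqrt(n_samples), with per-sample cost linear in d
```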
Neural population equations such as Wilson-Cowan equations, neural mass or field models are widely used to model brain activity on the mesoscopic or macroscopic scale. It is not known, however, how such large-scale models relate to underlying biophysical properties at the microscopic scale. By principled coarse-graining, we here derive stochastic population equations at the mesoscopic level starting from a microscopic network model of biologically-verified spiking neurons. We solve the critical problem of treating both fluctuations of the population activity due to the finite number of neurons and pronounced spike-history effects of single neurons such as refractoriness and adaptation. The derived mesoscopic dynamics accurately reproduces the mesoscopic activity obtained from direct simulations of the microscopic spiking neural network model. Going beyond classical mean-field theories for infinite N, our finite-N theory describes new emerging phenomena such as stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. We use the mesoscopic equations to efficiently simulate a model of a cortical microcircuit consisting of eight neuron types. Our theory opens a general framework for modeling and analyzing finite-size neural population dynamics based on realistic single-cell and synapse parameters.
Reference: T. Schwalger, M. Deger, and W. Gerstner. PLoS Comput. Biol., 13(4):e1005507, 2017. http://dx.doi.org/10.1371/journal.pcbi.1005507
I will explain how, using elementary linear algebra only, one can get solutions of two widely known problems on nonnegative matrices. The first one is a short proof of the known result stating that the nonnegative rank is NP-hard to compute, and the second one is an answer to the question on rational nonnegative factorizations asked by Cohen and Rothblum in 1993. I will explain how to generalize the results on spaces of factorizations and completely describe the algorithmic complexity not only of the nonnegative rank of a matrix but also of the related quantity known as positive semidefinite rank.
Since entanglement is defined as a resource for non-local tasks, and can consequently be quantified, the time evolution of this quantity has been the subject of intense interest. For instance, for practical implementations of quantum information protocols, which require a certain quantity of entanglement, it is valuable to understand how the amount of such a resource behaves in time. One peculiar characteristic of entanglement dynamics that has recently drawn a great deal of attention is the possibility of an initially entangled state losing all its entanglement in a finite time, instead of only asymptotically. Such an aspect was initially called entanglement sudden death, or finite-time disentanglement (FTD).
It has already been explored how typical this phenomenon is when one varies the initial states for several paradigmatic dynamics of two qubits and two harmonic oscillators. In this seminar I would like to discuss results, and some open questions, regarding the counterpart of this point of view, i.e., given a dynamics for a composite quantum system, should one expect to find some initially entangled state which exhibits FTD? As we will see, the answer is typically affirmative, in the sense that almost all quantum dynamical mappings do exhibit FTD, and the explanation relies strongly on the topology of the state space.
There are very, very many four-dimensional smooth manifolds. One way to study them is by defining invariants for them. Many invariants come from Topological Quantum Field Theories (TQFTs), which are very interesting from the physical perspective, since it is generally assumed that we all live on a four-dimensional manifold. In the 90s, the Crane-Yetter TQFT was defined in an attempt to understand quantum gravity as a TQFT. It was not very successful, and the resulting Crane-Yetter invariant was assumed to be quite trivial. Recently, I have worked out that it actually isn't as trivial as expected, since it contains at least finite gauge theory.
In this talk, I will explain how to make your own 4-manifold from simple building blocks, called "handles". I will mostly draw lots of nice pictures. Then, I'll give these pictures a semantics in category theory, more specifically, in ribbon fusion categories. At the end, I will bring everything together and show you how you can easily evaluate invariants of 4-manifolds yourself.
When studying entropy functions of multivariate probability distributions, polymatroids and matroids emerge. Entropy functions of pure multiparty quantum states give rise to analogous notions, called polyquantoids and quantoids. Polymatroids and polyquantoids are related via linear mappings and duality. Quantum secret sharing schemes that are ideal can be described by self-dual matroids. Expansions of integer polyquantoids to quantoids will be presented and linked to those of polymatroids.
Ranking theory is the first legitimate sister of probability theory. That is, it is a dynamic theory of belief and as such of comparable fundamentality and generality to Bayesianism. Ranks are also degrees of belief, ranging from minus to plus infinity, behaving in many ways similarly to probabilities and in characteristic ways unlike them; in particular, positive ranks adequately represent belief and distinguish more or less firm beliefs (negative ranks correspondingly express disbelief). Moreover, in contrast to other theories of belief, it can fully account for the dynamics of belief, a task equivalent to answering the problem of induction. The talk will explain some basic features and some surprising applications of ranking theory.
We discuss recent results on the approximation of solutions of PDEs with parameter-dependent or stochastic coefficients, as well as results and open questions concerning corresponding numerical methods and their computational complexity. In particular, the presented findings reveal mechanisms behind the efficiency of certain model reduction methods and show the influence of the choice of representations of random fields on the approximability of resulting solutions of PDEs.
My talk is about a topic from non-equilibrium quantum mechanics. The starting point is the Schrödinger equation with pair interaction for fermions. I then introduce and discuss different scaling limits which are of mean-field type. In these scaling limits, it is expected that the microscopic Schrödinger dynamics can be approximated by the time-dependent Hartree-Fock equation. This is an effective, non-linear equation with much fewer degrees of freedom. I present the results of coworkers and myself on the rigorous derivation of the Hartree-Fock equation. Several new methods were recently developed for tackling this kind of problem, and I will focus on introducing the one I used myself in recent works. This method is based on controlling a well-chosen functional by estimating certain transition amplitudes in the Fermi gas. The functional is directly related to the trace norm difference of reduced density matrices, and the estimates lead to explicit rates of convergence.
Modeling asset prices via a stochastic volatility approach is vastly popular, as these models capture many stylized facts of volatility dynamics, like long-term memory or the leverage effect. Essentially, stochastic volatility models are two-dimensional diffusion processes: the first process captures the dynamics of the random volatility; the second one couples to the volatility, driving the returns of the asset. Even though there is an extensive study on deriving properties, or even closed analytic formulae, of the distribution of the returns conditioned on the volatility, nearly nothing is known about the conditional distribution of the volatility when the returns are known. But precisely this distribution expresses our best estimate of the hidden volatility, which needs to be inferred from market data. Here, we quantify the uncertainty left about the volatility when the returns are known by the Shannon entropy. In turn, we investigate how much information the observed market returns actually provide about the volatility.
This motivates a careful numerical and analytical study of information theoretic properties of the Heston model. In addition, we study a general class of discrete time models where the volatility process is generated by a Gaussian process. These types of models are motivated from a machine learning perspective and we fit them to different data sets. Using Bayesian model selection, we investigate the influence of structural priors on the volatility dynamics. As before, we interpret our findings from an information theoretic perspective showing that observed market returns are not very informative about the volatility. From these observations, we conclude that the evidence for some of the major stylized facts on volatility dynamics is actually quite weak and prior knowledge might play a major role in volatility modeling.
Finding network modules (or communities, clusters) is a challenging task, especially when modules do not form a full decomposition of a network. In recent years many approaches for finding fuzzy network partitions have been developed, but most of them focus only on undirected networks. Approaches for finding modules in directed networks are usually based on network symmetrization, which tends to ignore important directional information by considering only one-step, one-directional node connections.
In this talk I will present our novel random-walk based approach for fuzzy module identification in directed, weighted networks coming from time series data, where edge directions play a crucial role in defining how well nodes in a module are inter-connected. Our method introduces a novel measure of communication between nodes using multi-step, bidirectional transitions encoded by a cycle decomposition of the probability flow. Symmetric properties of this measure enable us to construct an undirected graph that captures the information flow of the original graph as seen by the data, and to apply clustering methods designed for undirected graphs. Additionally, we will show that spectral properties of a random-walk process on this new graph are related to those of the random-walk process defined on the adjoint cyclic graph (which can be seen as a generalization of the line graphs used for fuzzy partitioning of undirected networks). Finally, we will demonstrate how our algorithm can be applied to analyzing earthquake time series data, which naturally induce (time-)directed networks.
Combinatorial Cheeger inequalities for a graph relate a combinatorial Cheeger constant with the spectral gap of the graph. Some of these isoperimetric inequalities can also be generalized to hypergraphs or simplicial complexes. We present an information-theoretic version of such inequalities for graphs and simplicial complexes by introducing information-theoretic Cheeger constants. For this we use the Kullback-Leibler divergence between corresponding hierarchical models as an information measure between different simplicial complexes and derive upper and lower bounds for this information measure. Finally we can relate the information-theoretic Cheeger constants with the spectral gap of simplicial complexes.
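For orientation, here is a hedged sketch of the classical graph case only (the usual conductance-type Cheeger constant and the normalized Laplacian; the information-theoretic constants of the talk are not reproduced here).

```python
import numpy as np
import networkx as nx
from itertools import combinations

G = nx.barbell_graph(5, 0)          # two K5's joined by a single edge
nodes = list(G.nodes())
vol = dict(G.degree())

def conductance(S):
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return cut / min(sum(vol[v] for v in S), sum(vol[v] for v in nodes if v not in S))

h = min(conductance(S) for r in range(1, len(nodes) // 2 + 1)
        for S in combinations(nodes, r))
lam2 = sorted(np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray()))[1]
# classical two-sided Cheeger inequality: lambda_2 / 2 <= h <= sqrt(2 * lambda_2)
print(lam2 / 2 <= h <= np.sqrt(2 * lam2), h, lam2)
```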
I will present a new approach to foundations of quantum theory that joins the insights of the maximum entropy approach to statistical mechanics (Jaynes, Ingarden), the information geometric approach to statistical inference (Chencov, Amari), and the operator algebraic approach to quantum theory (Araki, Haag). Nonlinear quantum kinematics is constructed using quantum information geometric structures over normal states on W*-algebras (quantum relative entropies and Banach Lie-Poisson structure), and replaces the foundational role played in von Neumann's quantum mechanics by probability theory, spectral paradigm, and Hilbert spaces. Nonlinear quantum dynamics is defined as constrained relative entropy maximisation (generalising Lüders’ rule) composed with nonlinear hamiltonian flow (generalising unitary evolution). It provides a prescription of noncommutative causal inference, and a nonlinear nonmarkovian alternative to the paradigm of completely positive instruments. Noncommutative Orlicz spaces play a special role in this new quantum kinematics and dynamics.
For a collection of random variables, the entropies of all subcollections form an entropic point. The points give rise to the entropy region. The region and its closure are far from being understood. The talk will review related open problems with algebraic flavor.
Squashed entanglement [Christandl/AW, JMP 45:829 (2004)] is a seemingly blessed measure of bipartite entanglement: it is an upper bound both on the distillable entanglement and the distillable key in a state, it is convex in the state, additive and monogamous. The latter property endears it to many, but it implies that it is very small for highly extendible states, such as $O(1/d)$ for the $d \times d$ fully antisymmetric state. Indeed, it was only a few years ago that faithfulness was proved, i.e. positivity on all entangled (=non-separable) states [Brandao et al., CMP 306:805 (2011)]. Using a new result of Fawzi and Renner [arXiv:1410.0664], which proves a version of a conjecture of the speaker from 2008 (later refined by Isaac Kim and Berta et al. [arXiv:1403.6102]), we show the converse of the above observation: small squashed entanglement of a state implies that it is close to a highly extendible one. This should be contrasted with entanglement of formation, which is small if and only if the state is close to a separable one. As is well-known, k-extendibility approximates separability as k grows, and in this way the faithfulness statement of Brandao et al. follows.

Fawzi/Renner's result is a characterization of approximate quantum Markov chains, i.e. states $\rho_{ABC}$ with $I(A:C|B) < \epsilon$, showing that there exists a cptp map $R\colon B \to BC$ with $-\log F[R(\rho_{AB}),\rho_{ABC}] < \epsilon$, where $F[\cdot,\cdot]$ is the fidelity between two states. This map is a variation of a "recovery" map introduced in the quantum setting by Petz [CMP 105:123 (1986)]. I will show a much stronger and more general characterization in the classical case using the Petz recovery map (aka "transpose channel") and ask whether it has a quantum analogue.
To recognize nonlinear structure in data sets, certain algorithms, such as the Eigenmaps and Diffusion Maps algorithms, use eigenfunctions of the Laplace operator on data graphs to embed the data in a low-dimensional Euclidean space. We study continuum versions of these algorithms. In particular, we derive bounds on their complexity, in that we show that the number of eigenfunctions needed only depends on the dimension, a Ricci curvature lower bound, the injectivity radius and the volume. To get an explicit bound, we need a quantitative estimate on the $C^\alpha$ Harmonic Radius that was introduced by Anderson and Cheeger.
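A hedged, minimal sketch of the discrete construction behind such algorithms (Gaussian affinities, unnormalized graph Laplacian, noisy circle data; all choices here are illustrative assumptions of mine, not those of the paper):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-D2 / 0.1)                                 # Gaussian affinity matrix
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                             # unnormalized graph Laplacian
vals, vecs = eigh(L)
embedding = vecs[:, 1:3]    # first nontrivial eigenvectors give a 2-d embedding
print(embedding.shape)
```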
Humans can process concepts. They can learn, store and retrieve discrete memory items, connect them by logical operations, classify sensor input in terms of categories, attach symbolic labels to represented items, and carry out so many more fascinating "high-level" information processing operations. Humans do this with their brains, and these brains are dynamical systems of supreme complexity – nonlinear, high-dimensional, stochastic, multiscale, adaptive – all in one. For decades this has fuelled a scientific quest to understand how neurodynamical systems can support conceptual information processing. This question has been approached from many angles, using a wealth of methods and levels of description and analysis. In my talk I will outline yet another approach to model how conceptual information processing can arise in the dynamics of recurrent neural networks. The core of this approach is to employ certain linear operators, called conceptors, which constrain the evolving dynamics of a recurrent neural network. These operators can be identified with "concepts" represented in the ongoing neural dynamics. Conceptors can be morphed and combined with Boolean operations. This endows recurrent neural networks with mechanisms to store and retrieve, generate, logically combine, morph, abstract and focus dynamical patterns. I will give an intuitive introduction to the formal theory of conceptor dynamics, and present a set of exemplary simulation studies.
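A hedged sketch of the basic construction as I understand it from the published conceptor framework (drive a random network with a pattern, collect states, and form C = R (R + alpha^{-2} I)^{-1} from the state correlation matrix R); the network size, aperture alpha and driving pattern below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, alpha = 50, 500, 10.0
W = rng.standard_normal((N, N)) / np.sqrt(N)      # recurrent weights
w_in = rng.standard_normal(N)                     # input weights

x, states = np.zeros(N), []
for t in range(T):
    u = np.sin(2 * np.pi * t / 20)                # driving pattern
    x = np.tanh(W @ x + w_in * u)
    states.append(x)

X = np.array(states).T                            # N x T matrix of network states
R = X @ X.T / T                                   # state correlation matrix
C = R @ np.linalg.inv(R + alpha**-2 * np.eye(N))  # conceptor associated with the pattern
print(np.linalg.eigvalsh(C).max())                # all eigenvalues lie in [0, 1)
```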
Many convex optimization problems have been solved through elegant transformations into fixed point problems. The aim of this talk is to present a useful toolbox of nonexpansive-type mappings for constrained extremum problems in the setting of CAT(0) spaces.
One of the remarkable feats of intelligent life is that it restructures the world it lives in for its own benefit. Beavers build dams for shelter and to produce better hunting grounds, bees build hives for shelter and storage, humans have transformed the world in a multitude of ways. Intelligence is not only the ability to produce the right reaction to a randomly changing environment, but is also about actively influencing the change in the environment, leaving artefacts and structures that provide benefits in the future.
In this talk, I want to explore if the framework of intrinsic motivation can help us understand and possibly reproduce this phenomenon. In particular, I will show some simple, exploratory results on how empowerment, as one example of intrinsic motivation, can produce structures in the environment of an agent.
The basic idea behind intrinsic motivation is that an agent's behaviour, or decision making, is not guided by some form of externally specified reward, but rather by the optimization of some agent-internal measurement, which then gives rise to complex and rich interactions with the world. A quantitative formulation of an intrinsic motivation should ideally be computable from an agent's perspective, be applicable to different sensory-motoric configurations, and should reflect different agent embodiments. One classic example of intrinsic motivation is Schmidhuber's artificial curiosity, where an agent acts in a way so that it learns the most about the environment. Other examples include Homeokinesis, or the predictive information framework.
Here we want to focus on empowerment, formally defined as the channel capacity between an agent's actuators and its own sensors at a later time. This measures the potential causal flow in an agent's action-perception loop, and can be thought of as an abstract measure of how much control an agent has over the world it can perceive. The more meaningful options an agent has to influence the world, the higher its empowerment. I will show some early results on how an empowerment-driven agent will manipulate a 3-d gridworld and how these manipulations reflect the agent's morphological configuration.
We introduce and study norms in the space of hermitian matrices, obtained from base norms in positively generated subspaces. These norms are closely related to discrimination of so-called generalized quantum channels, including quantum states, channels and networks. We further introduce generalized quantum decision problems and show that the maximal average payoff of decision procedures is again given by these norms. We also study optimality of decision procedures.
Local cell populations across the different layers of a cortical column display characteristic activity patterns during spontaneous activity and in response to visual stimulation. I will discuss different measures to analyze simultaneously recorded spike trains and illustrate their utility to understand data from anesthetized cats. Characterizations based on individual spike trains quantify fundamental properties such as regularity or selectivity with respect to the direction of motion. However, to capture how local populations in the different layers process and transmit sensory information, their joint response statistics are required. I will present results describing statistical dependencies between the responses of multiple cells during spontaneous activity and during stimulation with drifting gratings and natural movie scenes.
Distributed computation in artificial life and complex systems is often described in terms of component operations on information: information storage, transfer and modification. Information modification remains poorly described however, with the popularly-understood examples of glider and particle collisions in cellular automata being only quantitatively identified to date using a heuristic (separable information) rather than a proper information-theoretic measure. We outline how a recently-introduced axiomatic framework for measuring information redundancy and synergy, called partial information decomposition, can be applied to a perspective of distributed computation in order to quantify component operations on information. Using this framework, we propose a new measure of information modification that captures the intuitive understanding of information modification events as those involving interactions between two or more information sources. We also consider how the local dynamics of information modification in space and time could be measured, and suggest a new axiom that redundancy measures would need to meet in order to make such local measurements. Finally, we evaluate the potential for existing redundancy measures to meet this localizability axiom.
Trees appear frequently in biology, for instance as transportation systems to distribute blood, water, air or signals. These transportation systems carry information about the organs they are part of, which makes the comparison of tree-structures from different individuals an important problem.
We shall discuss a mathematical framework for comparing trees whose branches have properties such as branch length or shape, described by an element of a vector space. Such trees will be called 'geometric trees'. We build a space of geometric trees in which trees with different topologies, sizes and branch properties can all be found, and show that this tree-space is a geodesic space in which statistical properties can be defined with the help of geodesics. We discuss how, in its most general form, this tree-space is somewhat ill-behaved, and "regularize" the tree-space by labeling the trees. This "regularization" is based on a discussion of how the unlabeled tree-space relates to the space of phylogenetic trees defined by Billera, Holmes and Vogtmann (2001). We finish by discussing some recent results, both theoretical and experimental, on statistical analysis of labeled airway trees from human lungs.
The presented work is carried out in collaboration with Megan Owen, Sean Skwerer, Francois Lauze, Jens Petersen, Mads Nielsen and Marleen de Bruijne.
Optimal Morse matchings reveal essential structures of cell complexes which lead to powerful tools to study discrete 3-manifolds. However, such matchings are known to be NP-hard to compute on 3-manifolds. Here, we refine the study of the complexity of problems related to discrete Morse theory in terms of parameterized complexity. On the one hand we prove that the erasability problem is W[P]-complete on the natural parameter. On the other hand we propose an algorithm for computing optimal Morse matchings on triangulations of 3-manifolds which is fixed-parameter tractable in the treewidth of the dual graph of the triangulation as well as the bipartite graph representing the adjacency of the 1- and 2-simplexes. This is joint work with Ben Burton, Thomas Lewiner and Joao Paixao.
It is our view that the lack of a proper definition of a subsystem hinders all approaches to studying complex behavior of dynamical systems.
We attempt to develop a language/calculus of subsystems that could possibly be used to introduce rigorous notions of complexity, replication, self-organization and stability, and to formulate and prove statements involving the above notions.
I will explain how we use an information-theoretic approach to define the temporal dimension of a subsystem, and I will prove some simple theorems.
This will be an exploratory talk and I hope for (positive, of course) feedback.
Emergent gravity is based on a novel form of the equivalence principle, known as the Darboux theorem or the Moser lemma in symplectic geometry, which states that the electromagnetic force can always be eliminated by a local coordinate transformation as far as spacetime admits a symplectic structure, namely, a microscopic spacetime becomes noncommutative (NC). If gravity is emergent from U(1) gauge theory on NC spacetime, this picture of emergent gravity suggests a completely new quantization scheme in which quantum gravity is defined by quantizing spacetime itself, leading to a dynamical NC spacetime; this is radically different from the conventional approach of trying to quantize a phase space of metric fields. This approach to quantum gravity allows a background-independent formulation in which spacetime as well as matter fields are equally emergent from a universal vacuum of quantum gravity.
As people live longer, the question arises of how malleable aging is and whether it can be slowed or postponed. The classic evolutionary theories of aging provide the theoretical framework that has guided aging research for 60 years. Are the theories consistent with recent evidence?
For an extended abstract see http://www.sciencemag.org/content/338/6107/618.full.pdf
A Restricted Boltzmann Machine (RBM) is a stochastic network with undirected bipartite interactions between a set of observed variables and a set of hidden variables. We are interested in the geometry of the set of all possible marginal probability distributions of the visible variables. In the first lecture I give a comprehensible survey on mathematical aspects of standard RBMs with binary variables.
In the second lecture I discuss RBMs with discrete, finite valued variables. The multivalued RBM hierarchy of models embraces all mixture models of discrete product distributions as well as the standard binary RBMs, and provides a natural view of semi-restricted Boltzmann machines.
From the perspective of algebraic geometry, these models are Hadamard products of secant varieties of Segre embeddings of products of projective spaces. I present a geometric picture of the 'inference functions' and 'distributed mixtures-of-products' represented by multivalued RBMs, and show results on universal approximation, approximation errors, tropicalization and dimension. Hereby extending previous work on the binary case.
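As a small concrete companion to the first lecture, here is a hedged brute-force computation of the visible marginal of a tiny binary RBM (3 visible, 2 hidden units), using the standard factorization in which the sum over hidden configurations is done in closed form; the parameters are random and purely illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_vis, n_hid = 3, 2
W = rng.standard_normal((n_vis, n_hid))   # interaction weights
b = rng.standard_normal(n_vis)            # visible biases
c = rng.standard_normal(n_hid)            # hidden biases

def unnormalized_marginal(v):
    # p(v) is proportional to exp(b.v) * prod_j (1 + exp(c_j + (W^T v)_j)),
    # i.e. the sum over the 2^n_hid hidden configurations in closed form
    v = np.asarray(v, dtype=float)
    return np.exp(b @ v) * np.prod(1.0 + np.exp(c + W.T @ v))

probs = np.array([unnormalized_marginal(v) for v in product([0, 1], repeat=n_vis)])
probs /= probs.sum()
print(probs)   # one point of the RBM model: a distribution on {0,1}^3
```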
We study a chemotaxis system on a bounded domain in two dimensions where the formation of the chemical potential is subject to Dirichlet boundary conditions. For such a system the solution is kept bounded near the boundary, and hence the blowup set is composed of a finite number of interior points. If the initial total mass is $8\pi$ and the domain is close to a disc, then the solution exhibits a collapse in infinite time whose movement is governed by a gradient flow associated with the Robin function.
We introduce a notion of curvature on general planar graphs to study the geometric and spectral consequences of upper curvature bounds. Assuming non-positive curvature already has strong implications for the local and global structure of the graph: the graph is locally tessellating, infinite, and every vertex has empty cut locus. Moreover, negative curvature yields empty minimal bigons, positive isoperimetric constants and positive exponential growth, where explicit upper curvature bounds yield explicit estimates. For the spectrum this directly implies estimates on the bottom of the (essential) spectrum, where the case of uniformly decreasing curvature characterizes discrete spectrum and certain eigenvalue asymptotics. Finally, non-positive curvature ensures that there are no eigenfunctions of compact support.
We modify Ollivier's definition of the Ricci curvature of Markov chains on graphs to study the Ricci curvature of general graphs, Cartesian products of graphs, and random graphs. We also classify the Ricci-flat graphs with girth bigger than 5.
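For reference, Ollivier's curvature of a pair of vertices $x, y$ compares the transport cost of the neighbourhood measures with the graph distance:
$$\kappa(x,y) = 1 - \frac{W_1(m_x, m_y)}{d(x,y)},$$
where $m_x$ is the (possibly lazy) uniform measure on the neighbours of $x$ and $W_1$ is the Wasserstein (transportation) distance; the talk works with a modification of this definition.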
This is work in progress. We introduce a weak form of the weak Omori-Yau maximum principle for strongly local Dirichlet spaces. It reduces the problem of stochastic completeness of weighted graphs to that of metric graphs. We then obtain a purely analytic proof of Folz's volume growth criterion.
Large deviation theory deals with the decay of the probability of increasingly unlikely events. It is one of the key techniques of modern probability, a role which is emphasised by the award of the 2007 Abel prize to S.R.S. Varadhan, one of the pioneers of the subject. Large deviation theory is a part of probability theory that deals with the description of events where a sum of random variables deviates from its mean by more than a 'normal' amount, i.e., beyond what is described by the central limit theorem. The mini-course will give an overview and glimpse of new techniques for large deviations for a class of stochastic processes.
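The prototypical statement of this kind is Cramér's theorem (recalled here for i.i.d. real random variables with finite exponential moments): for $S_n = X_1 + \dots + X_n$ and $a > \mathbb{E}[X_1]$,
$$\lim_{n\to\infty} \frac{1}{n}\log \mathbb{P}(S_n \ge na) = -I(a), \qquad I(a) = \sup_{\lambda \in \mathbb{R}} \big(\lambda a - \log \mathbb{E}[e^{\lambda X_1}]\big).$$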
The course will rely on the recent book by Feng & Kurtz on Large Deviations for Stochastic Processes, and one of our aims is to elaborate on some of the key ideas and to provide an overview. In the first part we will review basic large deviations techniques adapted for stochastic processes. Beginning with the work of Cramér and including the fundamental work on large deviations for stochastic processes by Freidlin and Wentzell and by Donsker and Varadhan, much of the analysis has been based on change of measure techniques. However, recently another methodology for large deviations, analogous to the Prohorov compactness approach to weak convergence of probability measures, has been developed. The main theme of the course and of the book by Feng & Kurtz is the development of this approach to large deviation theory as it applies to sequences of càdlàg stochastic processes. This approach involves verification of exponential tightness and unique characterisation of the possible rate function. We conclude the first part with results on exponential tightness and give Puhalskii's analogue of the Prohorov compactness theorem. In the second part, we focus on Markov processes and give large deviation results based on the convergence of the corresponding Fleming semigroups. The success of this approach depends heavily on the idea of a viscosity solution of a nonlinear equation. We outline how the comparison principle for Markov processes can be replaced by viscosity semigroup convergence. Having established the existence of a large deviation principle, one needs to find appropriate representations of the rate function via stochastic control theory. We will provide a brief account of how effective dual representations of the rate functionals can be obtained. In the last part we shall demonstrate the effectiveness of these methods via examples of large deviation results for Markov processes, including the Donsker-Varadhan theory for occupation measures, weakly interacting stochastic particles, and deviations from hydrodynamic limits.
Jin Feng and Thomas G. Kurtz, Large Deviations for Stochastic Processes, Mathematical Surveys and Monographs, vol. 131, AMS 2006
A family of quantum states can be seen as carrying some information. If a quantum operation is applied, some information can be lost. However, there are situations when this does not happen and the original family of states can be recovered; we then say that the operation is reversible with respect to the family. The characterization of such situations is an important question in quantum statistics and error correction. We give a list of equivalent reversibility conditions in the finite-dimensional situation. These are given by preservation of certain distinguishability measures on states: a class of f-divergences, the L_1 distance, and the Chernoff and Hoeffding distances. Another type of condition is given in terms of a quantum Radon-Nikodym derivative and a factorization condition on the states. Reversibility is also characterized by preservation of a large class of quantum Fisher metrics.
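A representative condition of this kind, stated in standard notation (an assumption about the talk's conventions): by monotonicity, $D(\rho\|\sigma) \ge D(\Phi(\rho)\|\Phi(\sigma))$ for every channel $\Phi$, and $\Phi$ is reversible on the pair $\{\rho,\sigma\}$ precisely when equality holds, in which case the Petz map
$$\Phi_\sigma(X) = \sigma^{1/2}\,\Phi^*\big(\Phi(\sigma)^{-1/2}\, X\, \Phi(\sigma)^{-1/2}\big)\,\sigma^{1/2}$$
recovers both states.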
The talk is divided into two parts: at first, we explain a general, Banach space valued, ergodic theorem for an abstract class of functions which are defined on the space of finite subsets of some countable, amenable group. Secondly, we apply the latter result to show the uniform approximation of the integrated density of states for random Schrödinger operators on metric, amenable Cayley graphs.
We present spectral estimates for non-positive Gaussian curvature on tessellating planar graphs. Moreover, we can characterize discreteness of spectrum by uniformly unbounded curvature. These estimates work via an isoperimetric inequality and can be extended to general planar graphs of negative curvature.
Usually, quantum theory (QT) is introduced by giving a list of abstract mathematical postulates, including the Hilbert space formalism and the Born rule. Even though the result is mathematically sound and in perfect agreement with experiment, there remains the question why this formalism is a natural choice, and how QT could possibly be modified in a consistent way.
My talk is on recent work with Lluis Masanes, where we show that five simple operational axioms actually determine the formalism of QT uniquely. This is based to a large extent on Lucien Hardy's seminal work. We start with the framework of "general probabilistic theories", a simple, minimal mathematical description for outcome probabilities of measurements. Then, we use group theory and convex geometry to show that the state space of a bit must be a 3D (Bloch) ball, finally recovering the Hilbert space formalism. There will also be some speculation on how to find natural post-quantum theories by dropping one of the axioms.
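For orientation, the claim about the bit can be phrased concretely: every qubit state can be written as
$$\rho = \tfrac{1}{2}\big(I + \vec r\cdot\vec\sigma\big), \qquad \|\vec r\| \le 1,$$
so the state space is the solid 3D Bloch ball, with pure states on its boundary sphere; the axioms force exactly this geometry.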
http://arxiv.org/abs/1004.1483
I explain the construction and basic properties of the Gromov-Prohorov metric on the space of metric measure spaces. The generated topology can be used, for example, to prove convergence of Galton-Watson trees (modelling genealogical trees) to the continuum random tree in the infinite population limit.
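As a reminder (standard definition, not spelled out in the abstract): for two metric measure spaces $(X, r_X, \mu_X)$ and $(Y, r_Y, \mu_Y)$,
$$d_{\mathrm{GP}}\big((X,r_X,\mu_X),(Y,r_Y,\mu_Y)\big) = \inf_{(\varphi_X,\varphi_Y,Z)} d_{\mathrm P}^{Z}\big((\varphi_X)_*\mu_X, (\varphi_Y)_*\mu_Y\big),$$
where the infimum runs over isometric embeddings $\varphi_X, \varphi_Y$ of $X$ and $Y$ into a common complete separable metric space $Z$, and $d_{\mathrm P}^{Z}$ is the Prohorov metric on probability measures on $Z$.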
Ranking with pairwise comparisons arise in decision making and internet applications. Here we study the connections between three popular methods: HodgeRank, Principal Eigenvector (PEV), and Tropical Eigenvector. We show that their comparison can be embedded into one single optimization problem. As an example of this approach, we show that under iid noise, HodgeRank has asymptotically better probability of rank recovery than PEV. However, comparison remains open under the non-asymptotic regime, and this is one of the open problems in this area.
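A minimal sketch of the least-squares core of HodgeRank (illustrative only; the function name and toy data are mine, not the speaker's): given pairwise score differences $Y_{ij}$ with weights $w_{ij}$, one solves $\min_s \sum_{ij} w_{ij}(s_j - s_i - Y_{ij})^2$.

import numpy as np

def hodgerank(n_items, comparisons):
    # comparisons: list of (i, j, y_ij, w_ij), meaning "j beats i by y_ij" with weight w_ij;
    # returns global scores s, centred because s is only determined up to an additive constant
    rows, rhs = [], []
    for i, j, y, w in comparisons:
        row = np.zeros(n_items)
        row[i], row[j] = -np.sqrt(w), np.sqrt(w)
        rows.append(row)
        rhs.append(np.sqrt(w) * y)
    s, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return s - s.mean()

# toy example: three items with slightly inconsistent pairwise differences
print(hodgerank(3, [(0, 1, 1.0, 1.0), (1, 2, 1.2, 1.0), (0, 2, 2.1, 1.0)]))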
Casual evidence suggests that many strategic and non-strategic decisions are at least partially guided by analogy considerations relating technically different decisions to seemingly similar ones. However, such considerations often cannot be accounted for in common game-theoretic equilibrium models. In the talk, I present two recent equilibrium models which try to capture such aspects -- valuation equilibrium (Jehiel and Samet, 2007) and half-Nash equilibrium (Wichardt, 2010) -- and discuss both strong points and problems of these approaches. Moreover, I indicate potential ways to apply the respective ideas to the modelling of communication.
Conditional Independence (CI) models allow one to represent sophisticated relations between random variables. Understanding their geometric properties gives insights into the boundary structure, i.e., distributions with limited support.
In this talk we show how to decompose complicated CI-models into simpler ones using an algebraic technique called the toric fiber product. We also discuss differences between the classical CI-inference, using the 'axioms' of conditional independence, and algebraic CI-inference using containment of ideals in polynomial rings.
The overdamped motion of a Brownian particle in a potential landscape with multiple wells is known to exhibit metastability, i.e., for weak noise, transitions between potential minima occur on exponentially long time scales. The small-noise asymptotics of transition times is usually described by the Eyring-Kramers formula. However, this formula breaks down in the presence of degenerate (non-quadratic) saddles or wells. We will show how recent results by Bovier, Eckhoff, Gayrard and Klein, providing the first rigorous proof of the Eyring-Kramers formula, can be extended to degenerate landscapes.
This is joint work with Nils Berglund (Orléans).
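For context, the Eyring-Kramers formula in question (standard form for the diffusion $dX_t = -\nabla V(X_t)\,dt + \sqrt{2\varepsilon}\,dW_t$, with non-degenerate minimum $x$ and saddle $z$) reads
$$\mathbb{E}_x[\tau] \simeq \frac{2\pi}{|\lambda_1(z)|}\sqrt{\frac{|\det \nabla^2 V(z)|}{\det \nabla^2 V(x)}}\; e^{(V(z)-V(x))/\varepsilon} \quad \text{as } \varepsilon \to 0,$$
where $\lambda_1(z)$ is the unique negative eigenvalue of $\nabla^2 V(z)$; the degenerate case discussed in the talk is precisely where these determinants vanish or blow up.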
The talk consists of two parts. In a first part, I give a brief elementary introduction to the "concentration of measure" phenomenon, a topic on the boundary between high-dimensional geometry and probability theory. This part also contains an overview on applications in quantum information theory and statistical mechanics. In the second part, I report on work in progress with David Gross and Jens Eisert. We investigate the properties of random pure quantum states under energy constraints, and show that with overwhelmingly large probability, they look in some sense like Gibbs states. Technically, we prove a measure concentration result on energy submanifolds.
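A typical concentration statement of the kind used here (Lévy's lemma, with an unspecified universal constant $c>0$): if $f$ is a 1-Lipschitz function on the unit sphere $S^{d-1}$ with the uniform measure, then
$$\mathbb{P}\big(|f - \mathbb{E}f| \ge \epsilon\big) \le 2\, e^{-c\, d\, \epsilon^2},$$
so in high dimensions $f$ is nearly constant on almost all of the sphere.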
Asymptotic hypothesis testing in its simplest form is about discriminating two states of a lattice system, based on measurements on finite blocks that asymptotically cover the whole lattice. In general, it is not possible to discriminate the local states with certainty, and one's aim is to minimize the probability of error, subject to certain constraints. Hypothesis testing results show that, in various settings, the error probabilities vanish with an exponential speed, and the decay rates coincide with certain relative-entropy-like quantities. Apart from giving computable closed expressions for the error exponents, the importance of these results lies in providing an operational interpretation for the given relative-entropy-like quantities. Here we present such identities in the settings of Stein's lemma and the Chernoff and Hoeffding bounds for various classes of correlated states on cubic lattices.
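In the i.i.d. case the statements take the following familiar forms (standard notation; the correlated-state versions in the talk replace these quantities by suitable regularizations): for Stein's lemma the optimal type-II error decays as
$$\beta_n \approx e^{-n S(\rho\|\sigma)}, \qquad S(\rho\|\sigma) = \mathrm{Tr}\,\rho(\log\rho - \log\sigma),$$
while the symmetric error probability of the Chernoff bound decays with exponent $-\log \min_{0\le s\le 1}\mathrm{Tr}\,\rho^s\sigma^{1-s}$.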
Musical ensemble performance showcases the human capacity for temporally precise yet flexible interpersonal coordination. Ensemble musicians maintain synchrony in the face of large-scale tempo changes, expressively motivated deviations in local tempo, and random variability in timing. Despite these irregularities, a seemingly infinite number of isochronous and non-isochronous temporal structures can be synchronized, either in rhythmic unison or via the interlocking of complementary rhythms. The neurocognitive mechanisms that mediate these varieties of musical synchronization are specialized for entrainment, adaptive timing, and anticipatory action control. I will present research into these mechanisms, drawing on studies of sensorimotor synchronization (finger tapping) with adaptive metronomes and human partners, as well as a study of coordination in piano duos.
The emergence of novelty is a ubiquitous feature in science, technology, and economic life. It is the crucial input to the growth of human knowledge. At the same time, novelty is one of the most amorphous concepts in scientific thought. Both theorizing about, and decision making in the face of, novelty encounter notorious problems. This paper tries to shed some light on the question of why this is so and explores the possibilities for dealing with novelty in a more systematic fashion. The notion of degrees of novelty is introduced and compared to that of a probability measure. The results of the inquiry are summarized in some propositions.
In recent years there has been a growing interest in recreating experimentally the conditions for the emergence of language and in particular of (proto)grammatical structures (Galantucci 2005; Garrod et al. 2007; Selten & Warglien 2007; Kirby et al. 2008). In this talk I summarize recent results in this research area and discuss in more detail my experimental work with R. Selten (PNAS 2007) and its implications, in particular concerning language compositionality and its adaptive value. I also briefly discuss a new experimental strategy that promises to deliver laboratory languages which are ecologically more valid and closer to natural language.
The information divergence of a probability measure P from a hierarchical log-linear model is maximized over all P. A new upper bound on the maximum will be presented. For the models given by the bases of a matroid, the tightness of the bound will be analyzed and related to matroid representations by partitions. Connections to ML estimation from bad data and to secret sharing schemes will be discussed.
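In symbols (standard notation, assumed here): for a model $\mathcal{M}$ the quantity studied is
$$\max_{P}\; D(P\|\mathcal{M}), \qquad D(P\|\mathcal{M}) = \inf_{Q\in\mathcal{M}} D(P\|Q),$$
the maximal information divergence of a distribution from the hierarchical log-linear model.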
In a broad sense, the branch of statistical physics called "critical phenomena" deals with scale free systems. In the atmospheric sciences observations of apparently scale-free systems abound: area-perimeter relations in clouds display a robust fractal dimension of 4/3, rain event-size distributions are non-trivially scale-free over 5 orders of magnitude, the variance of precipitation rate obeys finite-size scaling as measurement resolution is changed. From the standpoint of statistical physics I will point out that most of these observations can be related to geometric or dynamical phase transitions. Methodological issues related to causal networks will be discussed.
Semigraphoids are combinatorial structures that arise in statistical learning theory. They are equivalent to convex rank tests, and to Minkowski summands of the permutohedron, a convex polytope whose vertices are labeled by the elements of the symmetric group. This lecture gives an introduction to this theory, and its application to the design of a new non-parametric test for finding periodically expressed genes from time course microarray experiments.
This talk focuses on polynomial dynamical systems over finite fields. These systems appear in a variety of contexts in computer science, engineering, and computational biology, for instance as models of intracellular biochemical networks. It is shown that several problems relating to their structure and dynamics, as well as their control, can be formulated and solved in the language of algebraic geometry.
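An illustrative toy example (not taken from the talk) of such a system over the field $\mathbb{F}_2$, iterated as a Boolean network:

from itertools import product

def step(state):
    # hypothetical coordinate update polynomials over F_2
    x1, x2, x3 = state
    return ((x2 * x3) % 2, (x1 + x3) % 2, x1 % 2)

# enumerate the complete state-transition map on F_2^3 (8 states)
for s in product((0, 1), repeat=3):
    print(s, "->", step(s))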
Through the observation of tunnelling splittings, molecular spectroscopy directly accesses the dynamics of hydrogen exchange reactions on a molecular level. The interpretation of experimental observations relies on appropriate atomistic models to describe phenomena such as adiabaticity and the resulting "non-statistical" kinetics. The talk focusses on one particular class of models, the reaction path Hamiltonian, and the limitations of current formulations in the context of multidimensional tunnelling problems.
There is a wide range of traditional as well as an increasing number of modern applications involving multi-dimensional data/operators described by higher-order tensors, which are, in fact, the higher-order analogues of vectors and matrices. Naive numerical tensor calculus suffers from the so-called "curse of dimensionality" (Bellman). The modern concept for treating higher-order tensors numerically is based on their structured low-rank decomposition via the canonical (rank-$1$) tensors. There are numerous successful applications of such methods in chemometrics, higher-order statistics, data mining, mathematical biology, physics, etc. We focus on the recent applications of tensor-product approximation to multi-dimensional operators arising in many-particle modeling and in standard FEM/BEM (say, discretisation of the electron- and molecular density functions, the classical potentials in $\mathbb{R}^d$, convolution and other integral transforms, elliptic Green's functions). In particular, we discuss the following issues:
-- Why the orthogonal Tucker/canonical models are efficient (approximation theory);
-- Methods of numerical multi-linear algebra;
-- How to represent nonlocal operators arising from the Hartree-Fock, Kohn-Sham, Ornstein-Zernike and the Boltzmann equations in the data-sparse tensor-product form;
-- Numerical examples in the electron- and molecular structure calculations;
-- Future perspectives.
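For reference, the canonical (CP) format referred to above writes a $d$-th order tensor as a short sum of rank-one tensors,
$$A \approx \sum_{k=1}^{r} u_k^{(1)} \otimes u_k^{(2)} \otimes \cdots \otimes u_k^{(d)},$$
so that storage scales linearly rather than exponentially in $d$; the Tucker format analogously uses a small core tensor contracted with orthogonal factor matrices.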
The topic of the talk is the concept of multiinformation, which is one of the information-theoretical characteristics of the degree of stochastic dependence among (finitely many) random variables. In the first part of the talk, a result about the non-existence of an axiomatic characterization of conditional independence models is recalled as an example of the theoretical use of the multiinformation function. In the second part, the multiinformation is related to conditional mutual information, which complies with several reasonable requirements on a measure of the degree of stochastic conditional dependence. The third part of the talk responds to an interesting question: whether it is possible to decompose multiinformation into level-specific measures of dependence among variables.
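In formulas (standard definition, assumed to match the talk's conventions):
$$M(X_1,\dots,X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1,\dots,X_n) = D\big(P_{X_1\dots X_n}\,\big\|\, P_{X_1}\otimes\cdots\otimes P_{X_n}\big),$$
the relative entropy of the joint distribution from the product of its marginals.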
The goal of this elementary talk will be to inform a general statistical audience briefly and superficially about graphical modelling of conditional independence structures by means of so-called chain graphs. This will be done by presenting three artificial toy examples whose aim is to motivate and illustrate the possible use of graphical methods in the following three areas: in contingency table processing, in multivariate statistical analysis and in probabilistic decision making. The relevant mathematical definitions will be given throughout the talk.
Conventional noncooperative game theory hypothesizes that the joint strategy of a set of reasoning players in a game will necessarily satisfy an "equilibrium concept". All other joint strategies are considered impossible. Under this hypothesis the only issue is what equilibrium concept is "correct".
This hypothesis violates the first-principles arguments underlying probability theory. Indeed, probability theory renders moot the controversy over what equilibrium concept is correct - every joint strategy can arise with non-zero probability. Rather than a first-principles derivation of an equilibrium concept, game theory requires a first-principles derivation of a distribution over joint (mixed) strategies. If you wish to distill such a distribution down to the prediction of a single joint strategy, that prediction should be set by decision theory, using your (!) loss function. Accordingly, for any fixed game, the predicted joint strategy - one's "equilibrium concept" - will vary with the loss function of the external scientist making the prediction. Game theory based on such considerations is called Predictive Game Theory (PGT).
This talk shows how information theory can provide such a distribution over joint strategies. The connection of this distribution to the quantal response equilibrium is elaborated. It is also shown that in many games, having a probability distribution with support restricted to Nash equilibria - as stipulated by conventional game theory - is impossible.
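One concrete instance of such a distribution (the logit quantal response form; stated here as an assumption about what the talk has in mind) lets each player $i$ mix according to
$$p_i(a_i) \propto \exp\big(\lambda\, \mathbb{E}_{p_{-i}}[u_i(a_i, a_{-i})]\big),$$
where $\lambda \ge 0$ interpolates between uniform randomization and best response.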
PGT is also used to: i) Derive an information-theoretic quantification of the degree of rationality; ii) Derive bounded rationality as a cost of computation; iii) Elaborate the close formal relationship between game theory and statistical physics; iv) Use this relationship to extend game theory to allow stochastically varying numbers of players.
At least since Hume, people have wondered whether there are first-principles limitations on what humans can deduce / infer about reality. Gödel was perhaps the first to produce a formal mathematical result on this theme, concentrating on the topic of what can be deduced.
More recent work has returned to the topic that originally interested Hume, inductive inference. That work establishes that the expected generalization accuracy of any given inference algorithm is given by an inner product between two quantities. The first of those quantities is (a particular representation of) the inference algorithm. The second quantity is the distribution of inference problems likely to be encountered in reality.
One ramification of this inner product formula is the "No Free Lunch" (NFL) theorems. These state that any two inference algorithms are equivalent, when their performance is averaged across all possible inference problems. On the other hand, in addition to the NFL theorems, there are other "Free Lunch" theorems, establishing assumption-free superiorities of some inference algorithms over others. So the mathematics tells us that while in some senses Hume was correct, he didn't see the full picture.
More recently still it has been realized that black-box optimization is formally analogous to inductive inference. In particular, it is now known that an inner product between one's optimization algorithm and the distribution of optimization problems likely to be encountered determines expected optimization performance. This again results in NFL theorems. Here those theorems state that any two optimization algorithms are equivalent when their performance is averaged across all possible optimization problems. Some have argued that when translated into biology this result has implications for intelligent design.
The most recent work on this topic has considered a framework covering many optimization scenarios in addition to black-box optimization. In particular this framework also covers multi-armed bandit problems, and the evolution of multiple co-evolving players. As a particular instance of the latter type of optimization problem, the framework covers "self-play" problems. In these problems one has a set of multiple players. The set of players work together, to produce a champion. That champion then engages one or more antagonists in a subsequent multi-player game. The optimization problem is how the players should work together to produce a best possible champion.
In contrast to the traditional optimization case where the NFL results hold, in self-play there are free lunches: in coevolution some algorithms produce better champions than other algorithms, averaged across all possible problems. However in the typical coevolutionary scenarios encountered in biology, there is no champion. There the NFL theorems still hold.
We characterize the spatial spreading of the coarsening process in the Allen-Cahn equation in terms of the propagation of a nonlinear modulated front. Unstable periodic patterns of the Allen-Cahn equation are invaded by a front, propagating in an oscillatory fashion, and leaving behind the homogeneous, stable equilibrium. During one cycle of the oscillatory propagation, two layers of the periodic pattern are annihilated. Galerkin approximations and Conley index for ill-posed spatial dynamics are used to show existence of modulated fronts for all parameter values. In the limit of small amplitude patterns or large wave speeds, we establish uniqueness and asymptotic stability of the modulated fronts. We show that the minimal speed of propagation can be characterized by a dichotomy depending on the existence of pulled fronts. Main tools here are an Evans function type construction for the infinite-dimensional ill-posed dynamics and an analysis of the complex dispersion relation based on Sturm-Liouville theory.
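For reference, the equation in question in its simplest one-dimensional form (standard normalization, assumed here) is
$$u_t = u_{xx} + u - u^3,$$
whose spatially periodic steady states are the unstable patterns invaded by the modulated fronts described above.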
We want to study invariant probability measures on a $\mathbb{Z}^d$ shift space which minimize the integral of a real-valued (energy) function $\psi$. We will employ an approach developed by Gerhard Keller, which crucially uses minimizing configurations. Keller's work will be generalized by developing a specialized function space of $\psi$'s which can be treated with Keller's approach.
Discontinuous Galerkin (DG) discretizations for the three parts of the Navier-Stokes equations are discussed. First, we show the very natural derivation of the DG method by LeSaint and Raviart for advection problems. Then, the LDG method for the Laplacian and the Stokes operator is presented. It is combined with the method for advection to obtain a stable discretization of the Oseen equations. Finally, this method is applied inside a nonlinear iterative scheme to obtain strongly divergence free solutions of the incompressible Navier-Stokes equations. In the second part, the construction of efficient solvers for the discrete problems is discussed.
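For orientation, the Oseen problem assembled from these pieces (standard form, with given divergence-free convection field $b$ and viscosity $\nu$) reads
$$-\nu\Delta u + (b\cdot\nabla)u + \nabla p = f, \qquad \nabla\cdot u = 0,$$
and the nonlinear iteration mentioned above repeatedly solves such problems, typically with $b$ taken from the previous iterate.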
The talk gives a short overview of the visualization of graphs and networks. We start with the general notion of visualization and methods for graph drawing including interaction. Special attention is given to animations of time-dependent graphs. The visualization of network topology can also be treated by showing incidence matrices, so we have a look into this area. Finally, we describe the use of dynamical system theory in visualization.
Following Abrams and Strogatz [1] and Patriarca and Leppänen [2], five other physics groups independently started to simulate the competition of languages, as opposed to the evolution of a human language out of ape sounds, or the learning of a language by a child [3]. This talk concentrates on the models of Christian Schulze et al. [4] and of Viviane de Oliveira et al. [5], which allow the simulation of a large number of languages, similar to today's 8,000 human languages. The model of ref. [4] deals with a continuous process of random mutations of a language, transfer from and to other languages, and flight away from languages spoken by only a few people. The model of ref. [5] combines these flight and mutation aspects, ignores transfer and describes the colonization of a large geographical region by people starting at one lattice point. The size of a language is defined by the number of people speaking it. The first model [4] gives a realistic log-normal shape for the histogram of language sizes but the numbers are bad. For the second model [5] our Monte Carlo simulations give sizes up to a thousand million, but not the nearly log-normal shape. A meeting is planned for mid-September 2006 in Poland.
[1] D.M. Abrams and S.H. Strogatz, Nature 424, 900 (2003).
[2] M. Patriarca and T. Leppänen, Physica A 338, 296 (2004).
[3] D. Stauffer, S. Moss de Oliveira, P.M.C. de Oliveira and J.S. Sa Martins, Biology, Sociology, Geology by Computational Physicists, Elsevier, Amsterdam 2006.
[4] C. Schulze and D. Stauffer, Int. J. Mod. Phys. C 16, 781 (2005); Physics of Life Reviews 2, 89 (2005); AIP Conference Proceedings 779, 49 (2005).
[5] V.M. de Oliveira, M.A.F. Gomes and I.R. Tsang, Physica A 2006 (two papers).
During blood vessel formation, endothelial cells respond not only to chemical gradients (chemotaxis), but also to the strain of the extracellular matrix (ECM) and, possibly, to ECM mechanical properties. The relative importance of the different factors during the formation of vascular networks under experimental conditions is still controversial. Patterns formed in culture conditions have been explained both by modeling the mechanical interaction between cells and the ECM, and by modeling potential chemical interactions. I shall discuss the assumptions and results behind the different mathematical models.
Despite broad interest in self-organizing systems, there are few quantitative criteria for self-organization which can be applied to dynamical models, let alone experimental data. The existing criteria all give counter-intuitive results in important cases. A resolution is offered by a recently-proposed criterion, namely an internally-generated increase in the statistical complexity, the amount of information required for optimal prediction of the system's dynamics. This complexity can be precisely defined for spatially-extended dynamical systems, using the probabilistic ideas of mutual information and minimal sufficient statistics. The definition also leads to a general method for predicting such systems. Examining the variation in the statistical complexity over space and time provides a way of automatically identifying the coherent structures generated by the system. Results on two important classes of cellular automata (CA) --- cyclic CA, which model excitable media, and sandpile CA, which are prototypes of self-organized criticality --- illustrate the general ideas.
Tumour cells taken from human patient tissues and grown in the laboratory (in vitro) over many years are termed human tumour cell lines. These cell lines are the first port of call when investigating different cancer therapies (e.g. chemotherapy and radiation). In general, most therapies are described in terms of their effects within the cell cycle. Often they induce cell cycle arrest and subsequent cell death. Any general mathematical model must incorporate all these features and should be able to reproduce experimental data.
In this presentation I will discuss the development and results of a general model of a cell line both unperturbed and perturbed by a number of cancer therapies.
Endothelial cells (EC), which form the lining of all blood vessels, represent an important signalling unit capable of responding to blood-borne signalling molecules and fluid shear stress. Signal transduction via intracellular messengers such as calcium and IP3 plays a role in regulation of vascular tone and endothelial permeability, factors which affect the likelihood of atherosclerotic plaque formation and heart disease. A mathematical model of the intracellular signalling process, consisting of a number of coupled ordinary differential equations, is presented. The typical response of an EC to an external stimulus is a transient increase in the intracellular free calcium concentration, followed by decay to a plateau level, which persists until the stimulus is removed. However, under certain circumstances, the model exhibits sustained calcium and IP3 oscillations for the duration of the stimulus. The model is coupled to the fluid dynamics in the blood vessel, which affect the concentrations of circulating agonists and the wall shear stress. In this way, the effects of different haemodynamic conditions on endothelial signalling may be examined, improving our understanding of the relationship between vessel geometry and atherosclerosis.
Modern imaging techniques have allowed us to describe human cancer in three dimensions, but we are still a long way from being able to provide a dynamic description of the growth and response to anti-cancer treatment of human cancer tissue. A number of features of cancer growth suggest that mathematical modelling may allow new insights into this problem. The proliferation of individual cancer cells is not exponential but reflects a balance between cell division and cell death. Moreover the transitions between various growth phases of cancer cells, as well as the transitions to cell death, exhibit some of the characteristics of stochastic behaviour. This talk will describe the results of dynamic studies of growing cancer cells, both before and after treatment with cytotoxic anticancer drugs, and the challenges that these results pose for mathematical modelling of human cancer.
Contributions of mathematics to biology can be of very different kinds: supportive, as in data analysis (statistics, approximations); challenging, by suggesting quantitative models that can be falsified; inspiring, by suggesting novel qualitative features that remain to be discovered in "real life". I will discuss this topic from the subjective viewpoint of a biologist, using examples from developmental biology and morphogenesis and their interpretation by topology.
Given a compact subset $C$ of $\mathbb{R}^n$ and a correspondence
$F : C \to \mathbb{R}^n$
(i.e., a subset of $C \times \mathbb{R}^n$), which conditions ensure the existence of an "infinite orbit", i.e., an infinite sequence $x_0, x_1, \ldots$ with $(x_i, x_{i+1}) \in F$ for all $i$?
If $F$ is "continuous", one can find a compact non-empty subset $D$ of $C$ with "$F(D)=D$". However, when can this be improved to yield the existence of an infinite orbit?
Let $L$ be a positive holomorphic line bundle on a compact Kähler manifold $X$. The Bergman kernel is $B_p(x)=\sum_i |s_i(x)|^2$, where $\{s_i\}$ is an orthonormal basis of $H^0(X,L^p)$. Theorem (Zelditch): there exist smooth functions $b_j$ on $X$ such that, as $p\to \infty$, $B_p(x)$ has the asymptotic expansion $\sum_{j=0}^\infty b_j(x) p^{n-j}$. This expansion plays a very important role in Donaldson's work on the relation between constant scalar curvature and the balance condition for the projective embedding. In this talk, I will explain how to get Zelditch's theorem from the asymptotic expansion of the heat kernel, and its generalization to the symplectic case.
Phylogenetic trees appear in attempts to describe the evolution of living beings. In the talk we will discuss modern approaches to constructing phylogenetic trees.
We will present an abstract setup that gives another approach to the stochastic Loewner evolution introduced by Oded Schramm, and that is closely related to possible scaling limits of planar self-avoiding walks (we will briefly review some basic facts and open questions on self-avoiding walks) as well as to other models and to conformal field theory.
More precisely, we will study the class of random subsets $K$ of the upper half-plane $H$ such that the law of $K$, conditioned on $K$ being a subset of $A$, is exactly the law of $f(K)$, where $f$ is a conformal map from $H$ onto $A$ (if it exists) with $f(0)=0$ and $f(\infty)=\infty$. We will in particular see that there exists a unique random simple curve from $0$ to $\infty$ satisfying this conformal restriction property.
This is based on joint work with Greg Lawler and Oded Schramm.
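For orientation, chordal SLE$_\kappa$, the one-parameter family in which this unique restriction curve sits, is defined through the Loewner equation (standard conventions assumed here):
$$\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,$$
with $B$ a standard one-dimensional Brownian motion; the restriction curve of the talk is known to correspond to the value $\kappa = 8/3$.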
There is a large body of work on the art of computing (specific, often very large) numerical determinants, starting with Gauss(ian elimination). It is much slower to compute specific symbolic determinants, whose entries are expressions in several variables rather than numbers. But in this talk I will describe how, sometimes, one can use computer algebra to automatically discover, and rigorously prove, general, closed-form formulas (as expressions in the dimension n) for determinants of infinite families of matrices $(a_{i,j}(n))_{1 \le i,j \le n}$.
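A hedged illustration of the discovery step (my own toy family, not the speaker's examples): compute small symbolic determinants with a computer algebra system and guess a closed form in $n$, which one then proves, e.g. from the recurrence given by cofactor expansion.

import sympy as sp

x = sp.symbols('x')

def family_det(n):
    # illustrative family: tridiagonal matrix with x on the diagonal and 1 next to it
    M = sp.Matrix(n, n, lambda i, j: x if i == j else (1 if abs(i - j) == 1 else 0))
    return sp.expand(M.det())

for n in range(1, 6):
    print(n, family_det(n))   # the printed polynomials suggest a Chebyshev-like pattern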
Zero-curvature representations (ZCRs) with values in a non-solvable Lie algebra and their 1-parametric families are important attributes of completely integrable nonlinear systems of PDEs, especially in dimension two.
While conservation laws and horizontal cohomology in general are computable via the so-called C-spectral sequence (A.M. Vinogradov, 1978), for ZCRs the corresponding cohomology (gauge cohomology) was introduced only in 1993. Gauge cohomology is related to ZCRs in much the same way as horizontal cohomology is related to conservation laws (the latter reduces to the former when the Lie algebra is Abelian).
In particular, a procedure for computing zero-curvature representations arises which, unlike the now classical Wahlquist-Estabrook procedure, is applicable to classification problems even in the absence of restrictions on the nature of the ZCRs other than irreducibility. Moreover, horizontal gauge cohomology provides obstructions to removability of the spectral parameter of a ZCR.
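For reference, with one common sign convention a ZCR of a two-dimensional system is a pair of functions $A, B$ valued in a Lie algebra $\mathfrak{g}$ such that
$$D_t A - D_x B + [A, B] = 0$$
holds as a consequence of the equations of the system, where $D_x, D_t$ denote total derivatives; the 1-parametric families mentioned above are families of such pairs depending on a spectral parameter.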