It has been a quarter century now since Banach spaces of anisotropic distributions were introduced to study statistical properties of chaotic dynamical systems via Ruelle transfer operators. This approach gives both new proofs of classical results and new results. The first successes were obtained for smooth hyperbolic dynamics. However, some natural dynamical systems, such as dispersive (Sinai) billiards, are not smooth. The singularities cause challenging technical difficulties. We shall survey new results on dispersive billiards (in discrete or continuous time) obtained in the past five years using anisotropic Banach spaces, ending with a very recent construction of the measure of maximal entropy for billiard flows satisfying a condition of sparse recurrence to singularities.
(In this joint work with Carrand and Demers, we obtain Bernoullicity, but no control of the speed of mixing.)
Starting from the ancient Greeks, I will try to explain some basic ideas behind resolutions of singularities and Springer theory using rings of invariants. If time allows, I will indicate how this connects to categorified knot invariants.
A guiding problem in symplectic geometry is the "Lagrangian intersection problem", which asks about the number of intersection points between certain smooth Lagrangian submanifolds in a symplectic manifold. It was originally promoted by V. Arnold, who was motivated by considerations from classical physics. While the original version of the Lagrangian intersection problem is now rather well understood, I will discuss recent work with Shaoyun Bai which initiates the study of the Lagrangian intersection problem for certain singular Lagrangian subsets (called "skeleta") which are important in symplectic geometry. Classical tools do not work in this context. Instead, we introduce new methods which are motivated by "quantum" geometry and homological mirror symmetry.
This talk will discuss recent joint work with Matthew Kwan, Ashwin Sah, and Mehtaab Sawhney, proving an old conjecture of Erdős and McKay (for which Erdős offered $100). This conjecture concerns Ramsey graphs, which are (roughly speaking) graphs without large complete or empty subgraphs. In order to prove the conjecture, we study edge statistics in Ramsey graphs, i.e. we study the distribution of the number of edges in a random vertex subset of a Ramsey graph. After discussing some background on Ramsey graphs, the talk will explain our results and give an overview of our proof approach.
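As a toy illustration of the edge-statistics quantity described above (and not of the proof techniques of the talk), the distribution can be estimated by sampling. The Python sketch below is my own illustration; all names are invented.

import random
from itertools import combinations

def edge_count_distribution(edges, n, m, trials=10000):
    """Empirical distribution of the number of edges induced by a uniformly
    random m-vertex subset of an n-vertex graph."""
    edge_set = {frozenset(e) for e in edges}
    counts = {}
    for _ in range(trials):
        subset = random.sample(range(n), m)
        k = sum(1 for pair in combinations(subset, 2)
                if frozenset(pair) in edge_set)
        counts[k] = counts.get(k, 0) + 1
    return {k: c / trials for k, c in sorted(counts.items())}

# A uniformly random graph is, with high probability, a Ramsey graph:
n = 100
edges = [(i, j) for i, j in combinations(range(n), 2) if random.random() < 0.5]
print(edge_count_distribution(edges, n, m=10))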
Already Jesse Douglas was aware of the fact that minimizing sequences for Dirichlet's integral of annulus-type surfaces spanning two parallel, co-axial planar circles might degenerate into a pair of discs. The characterization of "bubbling" of (approximate) harmonic maps from a closed surface to a closed Riemannian target manifold allowed Sacks-Uhlenbeck to conclude the existence of harmonic representatives for every homotopy class of maps in the case of target manifolds whose second homotopy group is trivial. Chang-Yang were able to give sufficient conditions for solving Nirenberg's problem for conformal metrics of prescribed Gauss curvature on the 2-sphere by studying the contribution of degenerate conformal metrics to the topological degree of the associated variational problem. In spite of these achievements, there still are many open questions related to the possible topological degeneration of comparison maps or "bubbling" in geometric variational problems, and we will discuss some of these questions.
Representation theory and quantum (enumerative) geometry are two areas of mathematics with physics origins. A new field is emerging at their intersection. I will describe two of its applications, to old problems in integrable lattice models and knot theory.
The lecture explains joint work with Simon Brendle on the deformation of hypersurfaces in Riemannian manifolds by a fully non-linear, parabolic geometric evolution system. The surfaces are assumed to satisfy a natural curvature condition ("2-convexity") that is weaker than convexity and move with a speed given by a non-linear mean value of their principal curvatures. It is explained how the possible singularities of the flow can be classified and overcome by surgery to construct a long-term solution of the flow that leads to the classification of all 2-convex surfaces in a natural class of Riemannian manifolds.
Felix Klein (1849-1925) is characterized by outstanding results in mathematics and its applications, and as a head of the reform of mathematical instruction. From early on, he was internationally oriented and supported mathematically gifted students regardless of their sex, religion, and nationality. This presentation will focus on Klein’s engagement as an impetus behind women studying mathematics. Klein cooperated with numerous foreign colleagues who also promoted women in mathematics. Among them were the geometer Gaston Darboux (1842-1917) in France, Luigi Cremona (1830-1903) in Italy, Arthur Cayley (1821-1895) and James Joseph Sylvester (1814-1897) in the United Kingdom, and Hieronymus G. Zeuthen (1839-1920) in Denmark. Since the 1890s, when he began to create a famous international centre of mathematics at the University of Göttingen, Klein allowed not only male mathematicians from abroad but also women to attend his courses. David Hilbert (1862-1943) followed in Klein’s footsteps.
The present contribution examines the beginning of women’s mathematical study at German universities and analyses the special efforts of Klein and Hilbert. It will be shown that they had to fight for the right of women to study and to receive doctoral and post-doctoral degrees. The analysis is based on archival materials in Göttingen related to the careers of Klein and Hilbert, among other sources. In this context, I will also discuss factors that influenced women’s careers in mathematics and still have a lasting effect today.
The fundamental question in cognitive neuroscience—what are the key coding principles of the brain enabling human thinking—still remains largely unanswered. Evidence from neurophysiology suggests that place and grid cells in the hippocampal-entorhinal system provide an internal spatial map, the brain’s SatNav—the most intriguing neuronal coding scheme outside the sensory system. Our framework is concerned with the key idea that this navigation system in the brain—potentially as a result of evolution—provides the blueprint for a neural metric underlying human cognition. Specifically, we propose that the brain maps experience in so-called ‘cognitive spaces’. In this talk, I will give an overview of our theoretical framework and experimental approach and will present showcase examples from fMRI, MEG and virtual reality experiments identifying cognitive coding mechanisms in the hippocampal-entorhinal system and beyond. Finally, I will sketch out our long-term cognitive neuroscience research program at the MPI, including key translations to information technology and the clinic.
Further reading: Bellmund, J. L. S., Gärdenfors, P., Moser, E. I., & Doeller, C. F. (2018). Navigating cognition: Spatial codes for human thinking. Science, 362(6415), eaat6766. https://doi.org/10.1126/science.aat6766
Linear recurrence sequences (LRS), such as the Fibonacci numbers, permeate vast areas of mathematics and computer science. In this talk, we consider three natural decision problems for LRS over the integers, namely the Skolem Problem (does a given LRS have a zero?), the Positivity Problem (are all terms of a given LRS positive?), and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). Such questions have applications in a wide array of scientific areas, ranging from theoretical biology and software verification to quantum computing and statistical physics. Perhaps surprisingly, the study of decision problems for linear recurrence sequences (and more generally linear dynamical systems) involves techniques from a variety of mathematical fields, including analytic and algebraic number theory, Diophantine geometry, and algebraic geometry. I will survey some of the known results as well as recent advances and open problems.
This is joint work with James Worrell.
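To make the decision problems above concrete, here is a brute-force prefix search for a zero of an integer LRS. This is only a semi-decision sketch (all names are mine), not one of the procedures discussed in the talk: finding a zero certifies a "yes" answer to the Skolem Problem, while its absence in any finite prefix proves nothing, which is exactly why the problem is hard.

def lrs_terms(coeffs, init, n):
    """First n terms of u_{k+d} = coeffs[0]*u_{k+d-1} + ... + coeffs[d-1]*u_k."""
    u = list(init)
    d = len(coeffs)
    while len(u) < n:
        u.append(sum(c * x for c, x in zip(coeffs, u[-d:][::-1])))
    return u[:n]

def zero_in_prefix(coeffs, init, n=1000):
    """Semi-decision for the Skolem Problem by finite search."""
    return any(t == 0 for t in lrs_terms(coeffs, init, n))

# u_{k+2} = u_{k+1} + u_k with u_0 = 1, u_1 = -1: the third term is 0.
print(zero_in_prefix([1, 1], [1, -1]))   # True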
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which, surprisingly, show that the standard backpropagation algorithm already generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin).
In his last letter to Hardy, four months before his early death in 1920, Ramanujan gave a list of 17 power series that he called "mock theta functions" and that he was sure would eventually become important in mathematics. An understanding of the properties of these functions and their generalizations ("mock modular forms") came only in 2002 with the thesis of Sander Zwegers, who showed that they have a weakened modular transformation property with an obstruction to true modularity that is given by an auxiliary function called the "shadow" and which is itself a modular form.
More recently it has transpired that these mock modular forms also appear naturally in physics, e.g. in the string theory of black holes. Even more recently they have also occurred in the discovery of new varieties of "Moonshine" (Mathieu moonshine, umbral moonshine,...) generalizing the famous Monstrous Moonshine of the 80s. We will give a survey of some of these developments.
A control system is a dynamical system on which one can act by means of what is called the control. For example, in a car, one can turn the steering wheel, press the accelerator pedal etc.; these are the controls. One of the main problems in control theory is the controllability problem: starting from a given situation, with a given target, can one, by using suitable controls depending on time, the given situation and the target, move from the given situation to the target? We study this problem with a special emphasis on the case where the nonlinearities play a crucial role. In finite dimension, a key tool in this case is the use of iterated Lie brackets, as shown in particular by the Chow theorem. This key tool also gives important results for some control systems modeled by means of partial differential equations. However, we do not know how to use it for many other control systems modeled by means of partial differential equations. We present methods to avoid the use of iterated Lie brackets. We give applications of these methods to the control of various physical control systems (Euler and Navier-Stokes equations of incompressible fluids, 1-D hyperbolic systems, heat equations, shallow water equations, Korteweg-de Vries equations, Schroedinger equations...) and to the stabilization problem, another of the main problems in control theory.
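To fix ideas, the standard textbook instance of the Lie bracket mechanism behind Chow's theorem (not taken from the abstract) is the kinematic car/unicycle:

$$ \dot x = u_1\cos\theta,\quad \dot y = u_1\sin\theta,\quad \dot\theta = u_2; \qquad f_1 = \begin{pmatrix}\cos\theta\\ \sin\theta\\ 0\end{pmatrix},\quad f_2 = \begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad [f_1,f_2] = \begin{pmatrix}\sin\theta\\ -\cos\theta\\ 0\end{pmatrix}. $$

Since $f_1$, $f_2$ and $[f_1,f_2]$ span $\mathbb{R}^3$ at every point, Chow's theorem yields controllability with only two controls acting on three states: the bracket supplies the missing "sideways" motion, as anyone who has parallel parked knows.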
Advances in computation, communication and embedded systems are enabling the deployment of cyberphysical systems of unprecedented complexity. This trend, which paves the way to technologies such as the Internet of Things, Industry 4.0, and the Industrial Internet, must be paralleled by new approaches in networked control, adapted to large-scale interconnections of subsystems that interact and exchange information.
In this talk we will address scalability of control design, focusing on methods where the complexity of synthesising a local controller is independent of the overall system size. Scalable control design is especially needed in industrial applications where the number of subsystems changes over time, sensors and actuators must be replaced with minimal human intervention, or no global model is available. We will present methods for the plug-and-play synthesis of local controllers, enabling the seamless addition and removal of subsystems while automatically denying plug-in and plug-out requests that would endanger safety or stability.
Then, we will describe the plug-and-play design of voltage controllers for islanded microgrids, which are prominent examples of cyberphysical systems. The goal is to allow the connection and disconnection of generation units and loads while preserving overall voltage stability. Simulations and experiments will be presented for illustrating the applicability of control synthesis procedures. This is a first step towards the deployment of multi-owner, autonomous energy islands with flexible size and topology.
The final part of the talk will be devoted to research perspectives towards enhanced adaptivity and autonomy of cyberphysical control systems.
In the last ten years the use of splines as a tool for the discretisation of partial differential equations has gained interest thanks to the advent of isogeometric analysis. For this class of methods, all robust and accurate techniques aiming at enhancing the flexibility of splines, while keeping their structure, are of paramount importance, since the tensor-product structure underlying spline constructions is far too restrictive in the context of the approximation of partial differential equations (PDEs).
I will describe various approaches, from adaptivity with regular splines, to regular patch gluing and to trimming. Moreover, I will show applications and test benches involving large deformation problems with contact and quasi-incompressible materials.
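As a reminder of the tensor-product structure mentioned above, here is a minimal, self-contained sketch (function names are mine, and isogeometric codes are of course far more elaborate) of Cox-de Boor evaluation of a tensor-product spline surface. Every coefficient lives on a rectangular grid, which is the rigidity that adaptivity, patch gluing and trimming try to overcome.

import numpy as np

def bspline_basis(t, p, i, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p
    over the knot vector t (half-open convention, so evaluate at x < t[-1])."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    val = 0.0
    if t[i + p] > t[i]:
        val += (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(t, p - 1, i, x)
    if t[i + p + 1] > t[i + 1]:
        val += (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) \
               * bspline_basis(t, p - 1, i + 1, x)
    return val

def surface(t_u, t_v, p, C, u, v):
    """Tensor-product spline S(u, v) = sum_ij C[i, j] B_i(u) B_j(v)."""
    Bu = [bspline_basis(t_u, p, i, u) for i in range(C.shape[0])]
    Bv = [bspline_basis(t_v, p, j, v) for j in range(C.shape[1])]
    return sum(C[i, j] * Bu[i] * Bv[j]
               for i in range(C.shape[0]) for j in range(C.shape[1]))

t = [0, 0, 0, 1, 2, 3, 3, 3]          # open knot vector, degree 2, 5 basis funcs
C = np.arange(25.0).reshape(5, 5)     # 5 x 5 grid of control coefficients
print(surface(t, t, 2, C, 1.5, 0.5))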
Cancer can be viewed as an evolutionary process, where the accumulation of mutations in a cell eventually causes cancer. The cells in a tissue are not only organized spatially, but typically also hierarchically. This affects the dynamics in these tissues and inhibits the accumulation of mutations. Mutations arising in primitive cells can lead to long-lived or even persistent clones, but mutations arising in further differentiated cells are short-lived and do not affect the organism. Both the spatial structure and the hierarchical structure can be modeled mathematically. The effect of spatial structure on evolutionary dynamics is non-trivial and depends on the precise implementation of the model. Hierarchical structure can delay or suppress the dynamics of cancer. While these models can lead to important conceptual insights, fitting these models directly to data remains challenging. However, closely related models have the remarkable property that they can make a prediction with data obtained from a single measurement.
References:
Werner et al., "Dynamics of Mutant Cells in Hierarchical Organized Tissues", PLOS CB (2011)
Hindersin & Traulsen, "Most Undirected Random Graphs Are Amplifiers of Selection for Birth-Death Dynamics, but Suppressors of Selection for Death-Birth Dynamics", PLOS CB (2015)
Werner, Beier et al., "Reconstructing the in vivo dynamics of hematopoietic stem cells from telomere length distributions", eLife (2016)
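A minimal sketch (my own illustration, in the spirit of the Hindersin & Traulsen reference above) of the Birth-Death Moran process on a graph, whose fixation probability quantifies how structure amplifies or suppresses selection:

import random

def moran_birth_death(adj, r, start):
    """One run of the Birth-Death Moran process on a graph. A single mutant of
    fitness r starts at node `start`; returns True if its lineage fixates."""
    n = len(adj)
    mutant = [False] * n
    mutant[start] = True
    count = 1
    while 0 < count < n:
        weights = [r if mutant[v] else 1.0 for v in range(n)]
        parent = random.choices(range(n), weights=weights)[0]   # birth
        child = random.choice(adj[parent])                      # death
        if mutant[child] != mutant[parent]:
            count += 1 if mutant[parent] else -1
            mutant[child] = mutant[parent]
    return count == n

# On the complete graph the fixation probability has the classical closed form
# (1 - 1/r) / (1 - 1/r**n); other graph structures shift it up or down.
adj = [[j for j in range(10) if j != i] for i in range(10)]     # K_10
runs = 2000
print(sum(moran_birth_death(adj, 1.5, 0) for _ in range(runs)) / runs)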
It is well known since de Moivre and Laplace that the Gaussian law describes the fluctuations of large independent particle systems. In this talk, we shall discuss extensions to strongly coupled systems such as random matrices or random tilings.
Many problems in the physical sciences require the determination of an unknown field from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. In this talk I will describe the advantages of formulating Bayesian inversion on function space in order to solve these problems. I will overview theoretical results concerning well-posedness of the posterior distribution, approximation theorems for the posterior distribution, and specially constructed MCMC methods to explore the posterior distribution. Special attention will be paid to various prior (regularization) strategies, including Gaussian random fields, and various geometric parameterizations such as the level set approach to piecewise constant reconstruction.
[1] M. Dashti, A.M. Stuart, "The Bayesian Approach To Inverse Problems". To appear in The Handbook of Uncertainty Quantification, Springer, 2016. http://arxiv.org/abs/1302.6989
[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White, "MCMC methods for functions: modifying old algorithms to make them faster". Statistical Science, 28 (2013) 424-446. http://homepages.warwick.ac.uk/~masdr/JOURNALPUBS/stuart103.pdf
[3] M.A. Iglesias, K. Lin, A.M. Stuart, "Well-Posed Bayesian Geometric Inverse Problems Arising in Subsurface Flow", Inverse Problems, 30 (2014) 114001. http://arxiv.org/abs/1401.5571
[4] M.A. Iglesias, Y. Lu, A.M. Stuart, "A level-set approach to Bayesian geometric inverse problems", In Preparation, 2014.
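For a flavour of the function-space MCMC methods of reference [2], here is a minimal sketch of the preconditioned Crank-Nicolson (pCN) sampler; `Phi` (the negative log-likelihood) and `sample_prior` (a draw from the Gaussian prior N(0, C)) are assumed to be supplied by the user.

import numpy as np

def pcn(Phi, sample_prior, u0, beta=0.2, n_steps=10000):
    """Preconditioned Crank-Nicolson MCMC (cf. reference [2] above). The
    acceptance ratio involves Phi only, which is why the method remains
    robust as the discretization of the unknown field is refined."""
    rng = np.random.default_rng()
    u, phi_u = np.asarray(u0, dtype=float), Phi(u0)
    chain = []
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * sample_prior()  # pCN proposal
        phi_v = Phi(v)
        if rng.random() < np.exp(min(0.0, phi_u - phi_v)):      # accept/reject
            u, phi_u = v, phi_v
        chain.append(u.copy())
    return chain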
Consider an unweighted k-nearest neighbor graph that has been built on a random sample from some unknown density p on Rd. Assume we are given nothing but the unweighted (!) adjacency matrix of the graph: we know who is among the k nearest neighbors of whom, but we do not know the point locations or any distances or similarity values between the points. Is it then possible to recover the original point configuration or estimate the underlying density p, just from the adjacency matrix of the unweighted graph? As I will show in the talk, the answer is yes. I present a proof for this result, and also discuss relations to the problem of ordinal embedding.
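To make the setup concrete, here is a small sketch (names are mine) that builds the directed, unweighted k-NN adjacency matrix from a hidden sample; the matrix A is then the only input available to the estimation problem of the talk.

import numpy as np

def knn_adjacency(points, k):
    """Directed, unweighted k-NN adjacency: A[i, j] = 1 iff j is among the k
    nearest neighbours of i. No distances survive -- only the combinatorics."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)            # a point is not its own neighbour
    nbrs = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros_like(d2, dtype=np.uint8)
    A[np.repeat(np.arange(len(points)), k), nbrs.ravel()] = 1
    return A

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))   # hidden sample from the unknown density p
A = knn_adjacency(X, k=10)      # the only data the statistician receives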
We will give an introduction to the methods of the book "Towards the mathematics of quantum field theory". One may sum up these methods by saying that the geometry of the physicists' "spaces of fields" can be studied either by a coordinate (i.e. algebraic or analytic) approach, or by a parametrized (i.e. geometric) approach. Both approaches are complementary, and the aim of categorical methods in geometry is to improve them by introducing new types of algebras and new types of spaces, in a way that is adapted to the study of both algebraic and geometric obstruction problems. Since such obstruction problems are widespread in quantum field theory, these new methods give a kind of "geometry of obstructions", which provides a way to understand obstruction theory in a geometric manner.
This talk reviews some of the phenomena and theoretical results on the long-time energy behaviour of continuous and discretized oscillatory systems that can be explained by modulated Fourier expansions: long-time preservation of total and oscillatory energies in oscillatory Hamiltonian systems and their numerical discretisations, near-conservation of energy and angular momentum of symmetric multistep methods for celestial mechanics, metastable energy strata in nonlinear wave equations, and long-time stability of plane wave solutions of nonlinear Schroedinger equations.
We describe what modulated Fourier expansions are and what they are good for. Much of the presented work was done in collaboration with Ernst Hairer. Some of the results on modulated Fourier expansions were obtained jointly with David Cohen and Ludwig Gauckler.
We discuss a family of ideas, algorithms, and results for learning from high-dimensional data. These methods rely on the idea that complex high-dimensional data has geometric structures, often low-dimensional, that, once discovered, assist in a variety of statistical learning tasks, as well as in tasks such as data visualization. We discuss various realizations of these ideas, from manifold learning and dimension reduction techniques to new techniques based on suitable multiscale geometric decompositions of the data. We will then discuss how these multiscale decompositions may be used to solve various tasks, from dictionary learning to classification, to the construction of probabilistic models for the data, to approximation of high-dimensional stochastic systems.
Dynamics in nature often proceed in the form of rare reactive events: The system under study spends very long periods of time at various metastable states and only very rarely transitions from one metastable state to another. Conformation changes of macromolecules, chemical reactions in solution, nucleation events during phase transitions, thermally induced magnetization reversal in micromagnets, etc. are just a few examples of such reactive events. One can often think of the dynamics of these systems as a navigation over a potential or free energy landscape, under the action of small amplitude noise. In the simplest situations the metastable states are then regions around the local minima on this landscape, and transition events between these regions are rare because the noise has to push the system over the barriers separating them. This is the picture underlying classical tools such as transition state theory or Kramers reaction rate theory, and it can be made mathematically precise within the framework of large deviation theory. In complex high dimensional systems, this picture can however be naive because entropic (i.e. volume) effects start to play an important role: Local features of the energy, such as the location of its minima or saddle points, may have much less of an impact on its dynamics than global features such as the width of low lying basins on the landscape: in these situations a more general framework for the description of metastability is required. In this talk, I will discuss tools that have been introduced to that effect, based e.g. on potential theory, and illustrate them on various examples, including the folding of toy models of proteins and the rearrangement of Lennard-Jones clusters.
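In its simplest quantitative form, the barrier-dominated picture sketched above is the classical Arrhenius/Kramers law, made rigorous by Freidlin-Wentzell theory (a standard formula, not specific to this talk): for the overdamped dynamics $dX_t = -\nabla V(X_t)\,dt + \sqrt{2\varepsilon}\,dW_t$,

$$ \lim_{\varepsilon\to 0}\ \varepsilon\,\log \mathbb{E}\,\tau_{A\to B} \;=\; \Delta V, $$

where $\tau_{A\to B}$ is the transition time and $\Delta V$ the height of the barrier separating the metastable sets. The entropic effects discussed in the talk are precisely what this purely energetic estimate misses: it sees the depth of the basins, but not their width.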
It is now 25 years since Hara and Slade published their seminal work on the mean-field behavior of percolation in high dimensions, showing that at criticality there is no percolation and identifying several percolation critical exponents. The main technique used is the lace expansion, a perturbation technique that allows us to compare percolation paths to random walks, based on the idea that faraway pieces of percolation paths are almost independent in high dimensions. In the past few years, a number of novel results have appeared for high-dimensional percolation. I intend to highlight the following topics: (1) the recent computer-assisted proof, with Robert Fitzner, that identifies the critical behavior of nearest-neighbor percolation above 14 dimensions using the so-called Non-Backtracking Lace Expansion (NoBLE); while these results are expected to hold above 6 dimensions, the previous and unpublished proof by Hara and Slade only applied above 18 dimensions. (2) The identification of arm exponents in high-dimensional percolation in two works by Asaf Nachmias and Gady Kozma, using a clever and novel difference inequality argument, and its implications for the incipient infinite cluster and random walks on it. (3) The finite-size scaling for percolation on a high-dimensional torus, where the largest connected components share many features with the Erdős-Rényi random graph; in particular, substantial progress has been made concerning percolation on the hypercube, where joint work with Asaf Nachmias avoids the lace expansion altogether. We assume no prior knowledge about percolation.
Measure concentration ideas developed during the last century in various parts of mathematics, including functional analysis, probability theory and statistical mechanics, areas typically dealing with models involving an infinite number of variables. After several early observations, the real birth of measure concentration took place in the early seventies with the new proof by V. Milman of Dvoretzky's theorem on spherical sections of convex bodies in high dimension. Since then, the concentration of measure phenomenon has spread out to a wide range of illustrations and applications, and has become a central tool and viewpoint in the quantitative analysis of asymptotic properties in numerous topics of interest including geometric analysis, probability theory, statistical mechanics, mathematical statistics and learning theory, random matrix theory, randomized algorithms, complexity etc. The talk will feature basic aspects and some of these illustrations.
Colloquium on the occasion of Wolfgang Hackbusch's retirement.
In joint work with Simon Blatt and Melanie Rupflin we lay out a functional analytic framework for the Lane-Emden equation $-\Delta u = u|u|^{p-2}$ on an $n$-dimensional domain, $n\ge 3$, in the supercritical regime $p>\frac{2n}{n-2}$, and study the associated gradient flow.
The soliton resolution conjecture for the focusing nonlinear Schrödinger equation (NLS) is the vaguely worded claim that a global solution of the NLS, for generic initial data, will eventually resolve into a radiation component that disperses like a linear solution, plus a localized component that behaves like a soliton or multi-soliton solution. Considered to be one of the fundamental problems in the area of nonlinear dispersive equations, this conjecture has eluded a proof or even a precise formulation to date. I will present a theorem that proves a "statistical version" of this conjecture at mass-subcritical nonlinearity. The proof involves a combination of techniques from large deviations, PDE, harmonic analysis and bare hands probability theory.
In this talk we discuss concentration inequalities that estimate deviations of functions of independent random variables from their expectation. Such inequalities often serve as an elegant and powerful tool and have countless applications. Various methods have been developed for proving such inequalities, such as martingale methods, Talagrand's induction method, or Marton's transportation-of-measure technique. In this talk we focus on the so-called entropy method, pioneered by Michel Ledoux, that is based on some simple information-theoretic inequalities. We present the main steps of the proof technique and discuss various inequalities and some applications.
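A representative example of such an inequality, one that the entropy method (among other techniques) delivers, is the bounded-differences (McDiarmid) inequality: if changing the $i$-th coordinate changes $f$ by at most $c_i$, then

$$ \mathbb{P}\big(f(X_1,\dots,X_n) - \mathbb{E} f \ge t\big) \;\le\; \exp\!\Big(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\Big). $$

Despite its simplicity, this already yields Gaussian-type concentration for a large class of functions of independent variables.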
Maximum likelihood estimation is a fundamental computational task in statistics. We discuss this problem for manifolds of low rank matrices. These represent mixtures of independent distributions of two discrete random variables. This non-convex optimization problem leads to some beautiful geometry, topology, and combinatorics. We explain how numerical algebraic geometry is used to find the global maximum of the likelihood function, and we present a remarkable duality theorem due to Draisma and Rodriguez.
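For the rank-1 stratum (the independence model), the global maximum of the likelihood is classical and closed-form, namely the outer product of the normalized marginals; higher ranks are where the numerical algebraic geometry of the talk is needed. A minimal numerical check (my own illustration, with an invented table):

import numpy as np

U = np.array([[4., 2., 2.],    # an invented 2 x 3 contingency table of counts
              [2., 4., 2.]])
N = U.sum()

# Rank-1 (independence) MLE: outer product of the normalized marginals.
p_hat = np.outer(U.sum(axis=1), U.sum(axis=0)) / N**2
assert abs(p_hat.sum() - 1.0) < 1e-12      # a genuine probability matrix

log_lik = (U * np.log(p_hat)).sum()        # log-likelihood at the global max
print(p_hat, log_lik)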
In most situations of modern materials science, several scales have to be considered sequentially, or concurrently. This raises several fundamental questions and gives rise to challenging computational issues. This is especially true when the materials considered have no simple, regular structure. The talk will overview some mathematical (and numerical) questions in this area. The level of exposition will be deliberately kept elementary. The flavour of the mathematical ingredients and techniques will be given, the focus being on the questions raised rather than the answers provided.
One-dimensional random interfaces occur for example in random growth models and random tiling models.
In some models their statistical properties turn out to be related to random matrix statistics. I will concentrate on random tiling or dimer models like the Aztec diamond and discuss the statistics of the tiles/dimers. The models can also be interpreted as certain random surfaces. Associated with these models are random point processes, so-called determinantal point processes. I will discuss these processes and their scaling limits, which are expected to be universal in the sense that they should arise as natural scaling limits in various models. The talk will give an overview of some developments in this area aimed at a general audience.
The effective dynamics of molecular systems can be characterized by the switching behavior between several metastable states, the so-called conformations of the system, which determine its functionality. Steering a molecular system from one conformation into another is, on the one hand, a means of controlling its functionality, while on the other hand it can be used to gather information about transition trajectories.
This talk considers optimal control problems that appear relevant in steered molecular dynamics (MD). It will be demonstrated how the associated Hamilton-Jacobi-Bellman (HJB) equation can be solved.
The main idea is to first approximate the dominant modes of the MD transfer operator by a low-dimensional Markov state model (MSM), and then solve the HJB for the MSM rather than the full MD. We will then discuss whether the resulting optimal control process may help to characterize the ensemble of transition trajectories.
The resulting method will be illustrated in application to the maximization of the population of alpha-helices in an ensemble of peptides.
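As a schematic of the reduced problem described above (not of the actual MD computations), once the dynamics are lumped into an MSM, the discretized HJB equation becomes a Bellman fixed-point equation on finitely many states, solvable by value iteration. All model data below are invented for illustration.

import numpy as np

def value_iteration(P, cost, gamma=0.95, tol=1e-10):
    """Solve the Bellman (discrete HJB) equation V = min_a [c_a + gamma P_a V]
    on a Markov state model; P[a] is the transition matrix under action a."""
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([cost[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.min(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmin(axis=0)    # value and optimal policy
        V = V_new

# Invented two-state MSM: state 0 is metastable and costly, state 1 the goal.
P = [np.array([[0.99, 0.01], [0.01, 0.99]]),  # action 0: plain dynamics
     np.array([[0.90, 0.10], [0.01, 0.99]])]  # action 1: steer out of state 0
cost = [np.array([1.0, 0.0]), np.array([1.5, 0.0])]
V, policy = value_iteration(P, cost)
print(V, policy)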
The numerical solution of initial boundary value problems of partial differential equations with random input data by generalized polynomial chaos (gpc) and multilevel Monte-Carlo (MLMC) methods is considered.
In numerical methods based on gpc expansions, random coefficients are parametrized in terms of countably many random variables via a Karhunen-Loeve (KL) or a multiresolution (MR) expansion, and random solutions are represented in terms of polynomial chaos expansions of the inputs' coordinates. Thus, the PDE problems are reformulated as parametric families of deterministic initial boundary value problems on infinite dimensional parameter spaces. Their solutions are represented as gpc expansions in the (possibly countably many) input parameters. Convergence rates for best N-term approximations of the parametric solutions and Galerkin and Collocation algorithms which realize these best N-term approximation rates are presented. The complexity of these algorithms is compared to those of MLMC space-time discretizations, in terms of the regularity of the input data, in particular for PDEs with propagation of singularities.
Joint work with Siddartha Mishra, Roman Andreev, Andrea Barth, Claude Gittelson, Jonas Sukys, David Berhardsgruetter of SAM, ETH, and with Albert Cohen (Paris), R. DeVore (Texas A&M) and Viet-Ha Hoang (NTU Singapore).
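Generically, the MLMC estimator mentioned above takes the telescoping form $\mathbb{E}[Q_L] = \mathbb{E}[Q_0] + \sum_{\ell=1}^{L}\mathbb{E}[Q_\ell - Q_{\ell-1}]$. A minimal sketch, where `sample_dQ` is a user-supplied (hypothetical) callable that must return the level difference computed from coupled random inputs:

def mlmc_estimate(sample_dQ, n_samples):
    """Multilevel Monte-Carlo estimator of E[Q_L]. sample_dQ(l) must return
    Q_l - Q_{l-1} (and Q_0 itself at l = 0), computed from the SAME random
    input on both levels; this coupling keeps the fine-level corrections
    small, so few expensive fine samples are needed."""
    return sum(
        sum(sample_dQ(level) for _ in range(n)) / n
        for level, n in enumerate(n_samples)
    )

# Typical call shape: many cheap coarse samples, few expensive fine ones.
# estimate = mlmc_estimate(sample_dQ, n_samples=[10000, 2000, 400, 80])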
Infinite random graphs, such as Galton-Watson trees and percolation clusters, may have real numbers that are eigenvalues with probability one, providing a consistent "sound". These numbers correspond to atoms in their density-of-states measure.
When does the sound exist? When is the measure purely atomic? I will review many examples and show some elementary techniques developed in joint works with Charles Bordenave and Arnab Sen.
Remarkably, understanding the spectra of random graphs also yields results about deterministic Cayley graphs of lamplighter groups. In joint work with L. Grabowski we answer an old question of W. Luck.
In the late eighties Peter Winkler introduced the following problem: consider two independent discrete-time random walks, X and Y, on the complete graph with N vertices. If the trajectories of X and Y are given, would it be possible, knowing all future steps of the walks and changing jump times only, to keep X and Y apart forever, with positive probability? It became well known as the Clairvoyant Demon Problem.
Soon after, Noga Alon observed that this question is equivalent to the existence of a phase transition in a planar dependent percolation process. Remarkably, several other interesting questions, such as Lipschitz embeddings of binary sequences and quasi-isometries between one-dimensional random objects, can also be reduced to a similar type of percolation.
During the lecture I will explain the deep conceptual differences between N. Alon's percolation process and "usual" dependent percolation models, and the difficulties to which they lead. In the second half of the talk I will present a proof of an affirmative answer to Winkler's original question.
Gradient systems can be understood as mathematical realizations of the Onsager principle in thermodynamics, which states that the flux is given by a positive definite operator, called the Onsager operator, applied to the thermodynamic driving force. We show that reaction-diffusion systems satisfying a detailed-balance condition can be formulated as a gradient system for the relative entropy and an Onsager operator (the inverse of the Riemannian tensor), which is given as a sum of a diffusion part (Wasserstein metric) and a reaction part. This approach allows us to connect gradient-flow formulations of discrete many-particle systems with their continuous limits. Moreover, well-established concepts for scalar equations, such as geodesic lambda-convexity or exponential decay to equilibrium, can be generalized to these more general systems.
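Schematically (suppressing the precise form of the reaction kernel, for which see Mielke's papers), the gradient structure reads

$$ \dot u \;=\; -\,\mathcal K(u)\,\mathrm{D}\mathcal E(u), \qquad \mathcal K(u) \;=\; \mathcal K_{\mathrm{diff}}(u) + \mathcal K_{\mathrm{react}}(u), \qquad \mathcal K_{\mathrm{diff}}(u)\,\xi \;=\; -\,\mathrm{div}\big(M(u)\,\nabla\xi\big), $$

with $\mathcal E$ the relative entropy, $M(u)$ a mobility giving the Wasserstein (diffusion) part, and $\mathcal K_{\mathrm{react}}(u)$ a positive semidefinite operator encoding the detailed-balance reactions.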
In this talk, we will consider several coupled multi-physics problems from different application areas such as finance, porous media and continuum mechanics. Quite often the naive application of standard discretization schemes results in poor numerical results: spurious oscillations in time or locking in space can be observed. Many of these problems can be written as constrained minimization problems on a convex set. Due to the inequality character and the non-linearities in the formulation, the numerical simulation is still challenging. Here we address several of these challenges and present a variationally consistent space discretization and an energy-preserving stable time integration method. The abstract framework of saddle point problems and local a priori estimates can help to provide optimal error bounds. Numerical results show the flexibility and the robustness.
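As a toy instance of a constrained minimization problem on a convex set, consider the one-dimensional obstacle problem. The deliberately naive projected-gradient sketch below (all parameters invented) is exactly the kind of method that the variationally consistent discretizations of the talk improve upon.

import numpy as np

n = 99
h = 1.0 / (n + 1)
# J(u) = u^T A u / 2 - f^T u over the convex set {u >= psi}: discrete 1-D
# obstacle problem with the Dirichlet Laplacian and a downward load.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.full(n, -1.0)
x = np.linspace(h, 1.0 - h, n)
psi = -0.05 - 0.3 * (x - 0.5) ** 2            # obstacle from below

u = np.zeros(n)                               # feasible start, since 0 >= psi
step = 0.45 * h**2                            # stable: < 2 / lambda_max(A)
for _ in range(20000):
    u = np.maximum(u - step * (A @ u - f), psi)   # gradient step + projection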
While partial differential equations (PDEs) have many fascinating applications in image processing, their usefulness for lossy image compression has hardly been studied so far. In this talk we show that PDEs have a high potential to become alternatives to modern compression standards such as JPEG and JPEG 2000. The idea sounds temptingly simple: We keep only a small amount of the pixels and reconstruct the remaining data with PDE-based interpolation. This gives rise to three interdependent questions: 1. Which data are best kept? 2. What are the most useful PDEs for data interpolation? 3. How can the selected data be encoded in an efficient way? Solving these problems requires combining ideas from different mathematical disciplines such as mathematical modelling, shape optimisation, discrete optimisation, interpolation and approximation, information theory, and numerical methods for PDEs. Since the talk is intended for a broad audience, we focus on the main ideas, and no specific knowledge in image processing is required.
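For question 2 in its simplest form, homogeneous diffusion inpainting reconstructs the discarded pixels by solving the Laplace equation with the kept pixels as Dirichlet data. A minimal Jacobi-iteration sketch (function names mine; periodic boundaries for brevity, and real codecs use more sophisticated PDEs and entropy coding):

import numpy as np

def diffusion_inpaint(image, mask, sweeps=5000):
    """Fill in the unknown pixels by harmonic interpolation: kept pixels
    (mask == True) stay fixed, the rest relaxes to the discrete Laplace
    equation via Jacobi iteration."""
    u = np.where(mask, image, image[mask].mean()).astype(float)
    for _ in range(sweeps):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[~mask] = avg[~mask]          # update only the discarded pixels
    return u

rng = np.random.default_rng(1)
img = rng.random((64, 64))
keep = rng.random((64, 64)) < 0.1      # keep roughly 10% of the pixels
rec = diffusion_inpaint(img, keep)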
Treating complex design or optimization problems, especially under online constraints, is often practically feasible only when the underlying model is suitably reduced.
The so called Reduced Basis Method is an attractive variant of a model order reduction strategy for models based on parametric families of PDEs since it often allows one to rigorously monitor the accuracy of the reduced model. A key role is played by a greedy construction of reduced bases based on appropriate numerically feasible surrogates for the actual distance of the solution manifold from the reduced space.
While this concept has meanwhile been applied to a wide scope of problems, the theoretical understanding concerning certified accuracy is still essentially confined to the class of elliptic problems. This is reflected by the varying performance of these concepts, for instance, for transport-dominated problems. We show that such a greedy space search is rate optimal, when compared with the Kolmogorov widths of the solution set, if the surrogates are in a certain sense tight. A key task is therefore to derive tight surrogates beyond the class of elliptic PDEs. We highlight the main underlying concepts, centering on the convergence of greedy methods in Hilbert spaces and the derivation of well-conditioned variational formulations for unsymmetric or singularly perturbed PDEs.
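A skeleton of the (weak) greedy loop discussed above, with the high-fidelity solver `solve` and the error indicator `surrogate` as assumed user-supplied callables:

import numpy as np

def weak_greedy(params, solve, surrogate, tol=1e-6, max_dim=50):
    """Weak greedy reduced-basis construction: repeatedly add the snapshot at
    the parameter where the error surrogate is largest. Rate-optimality
    relative to the Kolmogorov widths hinges on the surrogate being tight."""
    basis = []
    while len(basis) < max_dim:
        errors = [surrogate(mu, basis) for mu in params]
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            break
        snapshot = solve(params[worst])           # high-fidelity solve
        for b in basis:                           # Gram-Schmidt step
            snapshot = snapshot - (b @ snapshot) * b
        basis.append(snapshot / np.linalg.norm(snapshot))
    return basis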
This is a report on ongoing work with Anne Nouri from Marseille and Thierry Paul from Polytechnique Palaiseau. I dub the equation $$ \partial_t f + v\cdot\partial_x f - \partial_x\Big(\int f(x,w,t)\,dw\Big)\,\partial_v f = 0 $$ Vlasov-Dirac because it is a Vlasov-type equation where the standard potential has been replaced by the Dirac mass. This turns out to be an important problem for different reasons. 1. It is a genuine model for the numerical simulation of plasmas. 2. It exhibits some singular stability/instability properties which can be more easily analyzed than the standard instabilities (Landau damping and so on) for the original Vlasov equation. 3. It is at the crossroads between mean-field derivations and WKB or Wigner asymptotics.
Models of the Platonic and Archimedean solids may be found in many mathematics classrooms. Important examples of polyhedra in nature are the fullerenes in chemistry and icosahedral viral capsids in biology. Such polyhedra "self-assemble" from their constituent atoms or proteins. In the past few years, an exciting development in science has been the use of self-assembly as a strategy to build devices and containers on very small scales. These experiments raise some interesting mathematical questions. I will describe our work on "self-folding" polyhedra and various questions on the combinatorics of assembly pathways.
This work is in collaboration with several students and David Gracias's lab at Johns Hopkins.
Planar maps are graphs embedded in the plane, considered up to continuous deformation. They have been studied extensively in combinatorics, and they also have significant geometrical applications. Particular cases of planar maps are p-angulations, where each face (meaning each component of the complement of the edges) has exactly p adjacent edges.
Random planar maps have been used in theoretical physics, where they serve as models of random geometry. Our goal is to discuss the convergence in distribution of rescaled random planar maps viewed as random metric spaces. More precisely, we consider a random planar map M(n) which is uniformly distributed over the set of all p-angulations with n vertices. We equip the set of vertices of M(n) with the graph distance rescaled by the factor $n^{-1/4}$. Both in the case p=3 and when p>3 is even, we prove that the resulting random metric spaces converge as n tends to infinity to a universal object called the Brownian map. This convergence holds in the sense of the Gromov-Hausdorff distance between compact metric spaces. In the particular case of triangulations (p=3), this solves an open problem stated by Oded Schramm in his 2006 ICM paper. As a key tool, we use bijections between planar maps and various classes of labeled trees.
After very briefly surveying studies in complex systems biology, I discuss studies in the other direction, i.e., studies of dynamical systems inspired by biology rather than studies of biology by means of dynamical systems. Five topics are discussed.
The first issue concerns the reluctance of biological systems to relax to equilibrium. Biological systems, in general, are kept from falling to equilibrium. Put differently, is there some mechanism by which relaxation to equilibrium is hindered even in a closed physico-chemical system? We show that "transient dissipative structures" in macroscopic catalytic reaction systems exhibit such hindrance to relaxation, and then discuss the mechanism behind it.
The second issue concerns a Hamiltonian system with many degrees of freedom coupled globally with each other. In contrast to the naive expectation for equilibrium systems, we provide an example in which collective macroscopic oscillation continues over a large time span before relaxing to equilibrium, whose duration increases with the system size. This collective oscillation is explained by a self-consistent 'swing' mechanism.
The third issue concerns the number of dimensions beyond which a system is regarded as having many degrees of freedom. We show that there is a critical number at around 5 to 10, beyond which attractors often touch the basin boundary. This number is discussed in terms of the magnitude relation between exponential and factorial growth.
The fourth problem is motivated by cell differentiation. We give examples of such differentiation of states, which are represented as internal bifurcations in coupled dynamical systems.
Last, if time permits, the design of robust dynamical systems is discussed in relation to the evolution of gene regulatory networks, where a link between robustness to noise and robustness to structural change is shown, which implies evolutionary congruence between developmental and mutational robustness.
The substrate for heredity, DNA, is chemically rather inert. However, it bears one of the elements of information that specify the form of the organism. How can a form be specified, starting from DNA? Recent observations indicate that the dynamics of transcription — the process that decodes the hereditary information — can imprint forms of a certain topological class onto DNA. This topology makes it possible both to optimize transcription and to facilitate the concerted change of the transcriptional status in response to environmental modifications. To the best of our knowledge, this morphogenetic event is the first on the path from DNA to organism.
The solenoidal model of chromosomes will be described and anchored in transcriptomic data. New methods to analyse those data will be reported.
The periodic Lorentz gas describes a particle moving in a periodic array of spherical scatterers, and is one of the fundamental mathematical models for chaotic diffusion in a periodic set-up. In this lecture (aimed at a general mathematical audience) I describe the recent solution of a problem posed by Y. Sinai in the early 1980s, on the nature of the diffusion when the scatterers are very small. The problem is closely related to some basic questions in number theory, in particular the distribution of lattice points visible from a given position, cf. Polya's 1918 paper "[...] ueber die Sichtweite im Walde" (Polya's orchard problem). The key technology in our approach is measure rigidity, a branch of ergodic theory that has proved valuable in recent solutions of other problems in number theory and mathematical physics, such as the value distribution of quadratic forms at integers, quantum unique ergodicity and questions of diophantine approximation.
(This lecture is based on joint work with A. Strombergsson, Uppsala.)
Engineers are forced to develop processes and products in a set time frame. Thus, they have to provide quantitative and reliable predictions despite incomplete knowledge of the underlying physics. In the past, empirical equations or phenomenological models were used, but this required time-consuming validation and expensive pilot plant tests for each material or any change of the processing window. The trend to shorter product life cycles and an increasing market pressure to reduce costs demand new tools for the assessment and prediction of material properties for a wide range of processing conditions without complete validation. To meet these requirements, computer simulations have been employed recently. The respective codes have to accommodate multiscale information, from atomistics to the macroscopic scale, and generate output on an engineering level. This so-called 'integral materials modeling' will be presented for the example of aluminum sheet processing.
We review the problem of heat conduction in Hamiltonian systems and discuss the derivation of Fourier's law from a truncated set of equations for the stationary state of a mechanical system coupled to boundary noise.
Confining geometries are realised in nature and technology in manifold ways, e.g. in molecular compartments of (living) biological cells, in nanoporous media, in thin (~10 nm) polymer layers or in long channels with nanometer diameter. In the talk the dynamics of low-molecular-weight systems (as model systems) will be discussed briefly, and then emphasis is given to polymeric systems - prepared as isolated chains, as thin glassy layers or as brushes. It will turn out that the length scale on which molecular fluctuations take place becomes essential and that the confinement can give rise to new relaxational modes.
References:
1. Kremer, F., A. Huwe, M. Arndt, P. Behrens and W. Schwieger, "How many molecules form a liquid?", J. Phys. Condens. Matter 11, A175-A188 (1999)
2. Huwe, A., F. Kremer, P. Behrens and W. Schwieger, "Molecular dynamics in confining space: From the single molecule to the liquid state", Phys. Rev. Lett. 82, 11, pp. 2338-2341 (1999)
3. Spange, St., A. Gräser, A. Huwe, F. Kremer, C. Tintemann and P. Behrens, "Cationic host-guest polymerization of N-vinylcarbazole and vinyl ethers in MCM-41, MCM-48 and nanoporous glasses", Chem. European Journ. 7, No. 1, pp. 3722-3728 (2001)
4. Hartmann, L., W. Gorbatschow, J. Hauwede and F. Kremer, "Molecular dynamics in thin films of isotactic PMMA", Eur. Phys. J. E 8, pp. 145-154 (2002)
5. Kremer, F., A. Huwe, A. Schönhals and S.A. Rozanski, "Molecular Dynamics in Confining Space", Chapt. VI in "Broadband Dielectric Spectroscopy" (Eds. F. Kremer, A. Schönhals), Springer-Verlag Berlin (2002)
6. Serghei, A., F. Kremer, "Confinement-induced relaxation process in thin films of cis-polyisoprene", Phys. Rev. Letters Vol. 91, No. 16, pp. 165702-1-165702-4 (2003)
7. Kremer, F., L. Hartmann, A. Serghei, P. Pouret and L. Léger, "Molecular dynamics in thin grafted and spin-coated polymer layers", Eur. Phys. J. E Vol. 12, No. 1, pp. 139-142 (2003)
8. Serghei, A., F. Kremer, W. Kob, "Chain conformation in thin polymer layers as revealed by simulations of ideal random walks", Eur. Phys. J. E Vol. 12, No. 1, pp. 143-146 (2003)