It has been a quarter century since Banach spaces of anisotropic distributions were introduced to study statistical properties of chaotic dynamical systems via Ruelle transfer operators. This approach gives both new proofs of classical results and new results. The first successes were obtained for smooth hyperbolic dynamics. However, some natural dynamical systems, such as dispersive (Sinai) billiards, are not smooth. The singularities cause challenging technical difficulties. We shall survey new results on (discrete- or continuous-time) dispersive billiards obtained in the past five years using anisotropic Banach spaces, ending with a very recent construction of the measure of maximal entropy for billiard flows satisfying a condition of sparse recurrence to singularities.
(In this joint work with Carrand and Demers, we obtain Bernoullicity, but no control of the speed of mixing.)
Starting from the ancient Greeks, I will try to explain some basic ideas behind resolutions of singularities and Springer theory using rings of invariants. If time allows, I will indicate how this connects to categorified knot invariants.
The study of properties that make a combinatorial object (such as a permutation) quasirandom, i.e., make it resemble a truly random object of the same kind, is an important and active line of research in combinatorics, with various applications in computer science and statistics. The standard combinatorial way of comprehending quasirandomness evolved from the nowadays-classical results of Rödl, Thomason, and Chung, Graham and Wilson from the 1980s, and appears in fundamental concepts of modern combinatorics such as Szemerédi's Regularity Method.
The talk will start by discussing how the above-mentioned classical results on quasirandom objects can be viewed through the lens of the theory of combinatorial limits. We then employ analytic tools provided by the theory of combinatorial limits to solve several open problems concerning quasirandom objects of various kinds (in particular, directed graphs, permutations and Latin squares). At the end of the talk, we briefly explore the relation of the presented results to hypergraph regularity, which was developed independently by Gowers, and by Nagle, Rödl, Schacht and Skokan about two decades ago, and to the stochastic block model in statistics and network science.
A guiding problem in symplectic geometry is the "Lagrangian intersection problem". This problem asks about the number of intersection points between certain smooth Lagrangian submanifolds in a symplectic manifold. It was originally promoted by V. Arnold, who was motivated by considerations from classical physics. While the original version of the Lagrangian intersection problem is now rather well-understood, I will discuss recent work with Shaoyun Bai which initiates the study of the Lagrangian intersection problem for certain singular Lagrangian subsets (called "skeleta") which are important in symplectic geometry. Classical tools do not work in this context. Instead, we introduce new methods which are motivated by "quantum" geometry and homological mirror symmetry.
This talk will discuss recent joint work with Matthew Kwan, Ashwin Sah, and Mehtaab Sawhney, proving an old conjecture of Erdős and McKay (for which Erdős offered $100). This conjecture concerns Ramsey graphs, which are (roughly speaking) graphs without large complete or empty subgraphs. In order to prove the conjecture, we study edge-statistics in Ramsey graphs, i.e. we study the distribution of the number of edges in a random vertex subset of a Ramsey graph. After discussing some background on Ramsey graphs, the talk will explain our results and give an overview of our proof approach.
Already Jesse Douglas was aware of the fact that minimizing sequences for Dirichlet's integral of annulus-type surfaces spanning two parallel, co-axial planar circles might degenerate into a pair of discs. The characterization of "bubbling" of (approximate) harmonic maps from a closed surface to a closed Riemannian target manifold allowed Sacks-Uhlenbeck to conclude the existence of harmonic representatives for every homotopy class of maps in the case of target manifolds whose second homotopy group is trivial. Chang-Yang were able to give sufficient conditions for solving Nirenberg's problem for conformal metrics of prescribed Gauss curvature on the 2-sphere by studying the contribution of degenerate conformal metrics to the topological degree of the associated variational problem. In spite of these achievements, there still are many open questions related to the possible topological degeneration of comparison maps or "bubbling" in geometric variational problems, and we will discuss some of these questions.
Representation theory and quantum (enumerative) geometry are two areas of mathematics with physics origins. A new field is emerging at their intersection. I will describe two of its applications, to old problems in integrable lattice models and knot theory.
The lecture explains joint work with Simon Brendle on the deformation of hypersurfaces in Riemannian manifolds by a fully non-linear, parabolic geometric evolution system. The surfaces are assumed to satisfy a natural curvature condition ("2-convexity") that is weaker than convexity and move with a speed given by a non-linear mean value of their principal curvatures. It is explained how the possible singularities of the flow can be classified and overcome by surgery to construct a long-term solution of the flow that leads to the classification of all 2-convex surfaces in a natural class of Riemannian manifolds.
Felix Klein (1849-1925) is distinguished by outstanding results in mathematics and its applications, and as a head of the reform of mathematical instruction. From early on, he was internationally oriented and supported mathematically gifted students regardless of their sex, religion, and nationality. This presentation will focus on Klein's engagement as an impetus behind women studying mathematics. Klein cooperated with numerous foreign colleagues who also promoted women in mathematics. Among them were the geometer Gaston Darboux (1842-1917) in France, Luigi Cremona (1830-1903) in Italy, Arthur Cayley (1821-1895) in the United Kingdom, Hieronymus G. Zeuthen (1839-1920) in Denmark, and James Joseph Sylvester (1814-1897). Since the 1890s, when he began to create a famous international centre of mathematics at the University of Göttingen, Klein allowed not only male mathematicians from abroad but also women to attend his courses. David Hilbert (1862-1943) followed in Klein's footsteps.
The present contribution examines the beginning of women’s mathematical study at German universities and analyses the special efforts of Klein and Hilbert. It will be shown that they had to fight for the right of women to study and to receive doctoral and post-doctoral degrees. The analysis is based on archival materials in Göttingen related to the careers of Klein and Hilbert, among other sources. In this context, I will also discuss factors that influenced women’s careers in mathematics and still have a lasting effect today.
The fundamental question in cognitive neuroscience, namely what key coding principles of the brain enable human thinking, still remains largely unanswered. Evidence from neurophysiology suggests that place and grid cells in the hippocampal-entorhinal system provide an internal spatial map, the brain's SatNav, which is the most intriguing neuronal coding scheme outside the sensory system. Our framework is built on the key idea that this navigation system in the brain, potentially as a result of evolution, provides the blueprint for a neural metric underlying human cognition. Specifically, we propose that the brain maps experience in so-called 'cognitive spaces'. In this talk, I will give an overview of our theoretical framework and experimental approach and will present showcase examples from fMRI, MEG and virtual reality experiments identifying cognitive coding mechanisms in the hippocampal-entorhinal system and beyond. Finally, I will sketch out our long-term cognitive neuroscience research program at the MPI, including key translations to information technology and the clinic.
Further reading: Bellmund, J. L. S., Gärdenfors, P., Moser, E. I., & Doeller, C. F. (2018). Navigating cognition: Spatial codes for human thinking. Science, 362(6415), eaat6766. https://doi.org/10.1126/science.aat6766
Linear recurrence sequences (LRS), such as the Fibonacci numbers, permeate vast areas of mathematics and computer science. In this talk, we consider three natural decision problems for LRS over the integers, namely the Skolem Problem (does a given LRS have a zero?), the Positivity Problem (are all terms of a given LRS positive?), and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). Such questions have applications in a wide array of scientific areas, ranging from theoretical biology and software verification to quantum computing and statistical physics. Perhaps surprisingly, the study of decision problems for linear recurrence sequences (and more generally linear dynamical systems) involves techniques from a variety of mathematical fields, including analytic and algebraic number theory, Diophantine geometry, and algebraic geometry. I will survey some of the known results as well as recent advances and open problems.
This is joint work with James Worrell.
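To fix ideas, here is a minimal Python sketch (an illustration of the definitions above, not of the talk's methods): it generates an LRS from its recurrence and initial values, and checks the Skolem and Positivity properties over a finite horizon only, which is all a naive search can do; the decision problems ask about all n.

```python
# Toy illustration: an integer LRS u_{n+k} = c_0 u_n + ... + c_{k-1} u_{n+k-1}
# is determined by coefficients c_0..c_{k-1} and initial values u_0..u_{k-1}.
# A finite scan can only *find* a zero or *refute* positivity within a horizon;
# deciding these properties for all n is the Skolem / Positivity Problem.

def lrs_terms(coeffs, initial, n_terms):
    """First n_terms of the LRS; coeffs[i] multiplies u_{n+i} (oldest first)."""
    u = list(initial)
    while len(u) < n_terms:
        u.append(sum(c * x for c, x in zip(coeffs, u[-len(coeffs):])))
    return u[:n_terms]

# Example: u_{n+2} = u_n + u_{n+1} with u_0 = 2, u_1 = -1; a zero occurs at n = 3.
terms = lrs_terms([1, 1], [2, -1], 20)
print(terms)
print("zero within horizon:", 0 in terms)                        # truncated Skolem check
print("all positive within horizon:", all(t > 0 for t in terms))  # truncated Positivity check
```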
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments which, surprisingly, show that even the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin).
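In rough terms (the notation below is mine, and the statement suppresses logarithmic factors and natural encodability assumptions on the weights), the lower bound says that the worst-case error achievable over a function class $\mathcal C$ by networks $\Phi$ with $M$ nonzero weights cannot decay faster than a rate governed by an optimal exponent $\gamma^*(\mathcal C)$ intrinsic to the class:
$$\sup_{f \in \mathcal C}\;\inf_{\Phi \,:\, M(\Phi) \le M} \|f - \Phi\| \;\gtrsim\; M^{-\gamma^*(\mathcal C)},$$
and the $\alpha$-shearlet-based constructions show that this rate is in fact attained.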
In his last letter to Hardy, four months before his early death in 1920, Ramanujan gave a list of 17 power series that he called "mock theta functions" and that he was sure would eventually become important in mathematics. An understanding of the properties of these functions and their generalizations ("mock modular forms") came only in 2002 with the thesis of Sander Zwegers, who showed that they have a weakened modular transformation property with an obstruction to true modularity that is given by an auxiliary function called the "shadow" and which is itself a modular form.
More recently it has transpired that these mock modular forms also appear naturally in physics, e.g. in the string theory of black holes. Even more recently they have also occurred in the discovery of new varieties of "Moonshine" (Mathieu moonshine, umbral moonshine,...) generalizing the famous Monstrous Moonshine of the 80s. We will give a survey of some of these developments.
A control system is a dynamical system on which one can act thanks to what is called the control. For example, in a car, one can turn the steering wheel, press the accelerator pedal etc. These are the control(s). One of the main problems in control theory is the controllability problem. It is the following one. One starts from a given situation and there is a given target. The controllability problem is to see if, by using suitable controls depending on time, the given situation and the target, one can move from the given situation to the target. We study this problem with a special emphasis on the case where the nonlinearities play a crucial role. In finite dimension, a key tool in this case is the use of iterated Lie brackets, as shown in particular by Chow's theorem. This key tool also gives important results for some control systems modeled by means of partial differential equations. However, we do not know how to use it for many other control systems modeled by means of partial differential equations. We present methods to avoid the use of iterated Lie brackets. We give applications of these methods to the control of various physical control systems (Euler and Navier-Stokes equations of incompressible fluids, 1-D hyperbolic systems, heat equations, shallow water equations, Korteweg-de Vries equations, Schroedinger equations...) and to the stabilization problem, another of the main problems in control theory.
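As a textbook illustration of the Lie bracket mechanism (my example, not necessarily one used in the talk), consider the driftless system
$$\dot x = u_1, \qquad \dot y = u_2, \qquad \dot z = x\,u_2 - y\,u_1,$$
i.e. $\dot q = u_1 f_1(q) + u_2 f_2(q)$ with $f_1 = (1,0,-y)^{\mathsf T}$ and $f_2 = (0,1,x)^{\mathsf T}$. The two control vector fields span only a plane at each point, but their bracket $[f_1,f_2] = (0,0,2)^{\mathsf T}$ supplies the missing direction, so by Chow's theorem the system is controllable even though $z$ is not directly actuated.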
Advances in computation, communication and embedded systems are enabling the deployment of cyberphysical systems of unprecedented complexity. This trend, which paves the way to technologies such as the Internet of Things, Industry 4.0, and the Industrial Internet, must be paralleled by new approaches in networked control, adapted to large-scale interconnections of subsystems that interact and exchange information.
In this talk we will address scalability of control design, focusing on methods where the complexity of synthesising a local controller is independent of the overall system size. Scalable control design is especially needed in industrial applications where the number of subsystems changes over time, sensors and actuators must be replaced with minimal human intervention, or no global model is available. We will present methods for the plug-and-play synthesis of local controllers, enabling the seamless addition and removal of subsystems while automatically denying plug-in and plug-out requests that would endanger safety or stability.
Then, we will describe the plug-and-play design of voltage controllers for islanded microgrids, which are prominent examples of cyberphysical systems. The goal is to allow the connection and disconnection of generation units and loads while preserving overall voltage stability. Simulations and experiments will be presented to illustrate the applicability of the control synthesis procedures. This is a first step towards the deployment of multi-owner, autonomous energy islands with flexible size and topology.
The final part of the talk will be devoted to research perspectives towards enhanced adaptivity and autonomy of cyberphysical control systems.
In the last ten years, the use of splines as a tool for the discretisation of partial differential equations has gained interest thanks to the advent of isogeometric analysis. For this class of methods, robust and accurate techniques that enhance the flexibility of splines while keeping their structure are of paramount importance, since the tensor-product structure underlying spline constructions is far too restrictive in the context of the approximation of partial differential equations (PDEs).
I will describe various approaches, from adaptivity with regular splines, to regular patch gluing and to trimming. Moreover, I will show applications and test benches involving large deformation problems with contact and quasi-incompressible materials.
Cancer can be viewed as an evolutionary process, where the accumulation of mutations in a cell eventually causes cancer. The cells in a tissue are not only organized spatially, but typically also hierarchically. This affects the dynamics in these tissues and inhibits the accumulation of mutations. Mutations arising in primitive cells can lead to long-lived or even persistent clones, but mutations arising in further differentiated cells are short-lived and do not affect the organism. Both the spatial structure and the hierarchical structure can be modeled mathematically. The effect of spatial structure on evolutionary dynamics is non-trivial and depends on the precise implementation of the model. Hierarchical structure can delay or suppress the dynamics of cancer. While these models can lead to important conceptual insights, fitting these models directly to data remains challenging. However, closely related models have the remarkable property that they can make a prediction with data obtained from a single measurement.
References:
Werner et al., "Dynamics of Mutant Cells in Hierarchical Organized Tissues", PLOS CB (2011)
Hindersin & Traulsen, "Most Undirected Random Graphs Are Amplifiers of Selection for Birth-Death Dynamics, but Suppressors of Selection for Death-Birth Dynamics", PLOS CB (2015)
Werner, Beier et al., "Reconstructing the in vivo dynamics of hematopoietic stem cells from telomere length distributions", eLife (2016)
It has been well known since de Moivre and Laplace that the Gaussian law describes the fluctuations of large independent particle systems. In this talk, we shall discuss extensions to strongly coupled systems such as random matrices or random tilings.
Many problems in the physical sciences require the determination of an unknown field from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. In this talk I will describe the advantages of formulating Bayesian inversion on function space in order to solve these problems. I will overview theoretical results concerning well-posedness of the posterior distribution, approximation theorems for the posterior distribution, and specially constructed MCMC methods to explore the posterior distribution. Special attention will be paid to various prior (regularization) strategies, including Gaussian random fields, and various geometric parameterizations such as the level set approach to piecewise constant reconstruction.
[1] M. Dashti, A.M. Stuart, "The Bayesian Approach To Inverse Problems". To appear in The Handbook of Uncertainty Quantification, Springer, 2016. http://arxiv.org/abs/1302.6989
[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White, "MCMC methods for functions: modifying old algorithms to make them faster". Statistical Science, 28 (2013) 424-446. http://homepages.warwick.ac.uk/~masdr/JOURNALPUBS/stuart103.pdf
[3] M.A. Iglesias, K. Lin, A.M. Stuart, "Well-Posed Bayesian Geometric Inverse Problems Arising in Subsurface Flow", Inverse Problems, 30 (2014) 114001. http://arxiv.org/abs/1401.5571
[4] M.A. Iglesias, Y. Lu, A.M. Stuart, "A level-set approach to Bayesian geometric inverse problems", In Preparation, 2014.
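The function-space formulation advocated in [1] can be summarized in one formula (standard notation): for data $y = \mathcal G(u) + \eta$ with Gaussian noise $\eta \sim N(0,\Gamma)$ and prior $\mu_0$ on the unknown function $u$, the posterior $\mu^y$ is specified by its density with respect to the prior,
$$\frac{d\mu^y}{d\mu_0}(u) \;\propto\; \exp\big(-\Phi(u;y)\big), \qquad \Phi(u;y) = \tfrac12\,\big\|\Gamma^{-1/2}\big(y - \mathcal G(u)\big)\big\|^2,$$
and well-posedness means, in particular, that $\mu^y$ depends continuously (e.g., in Hellinger distance) on $y$.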
Consider an unweighted k-nearest neighbor graph that has been built on a random sample from some unknown density p on R^d. Assume we are given nothing but the unweighted (!) adjacency matrix of the graph: we know who is among the k nearest neighbors of whom, but we do not know the point locations or any distances or similarity values between the points. Is it then possible to recover the original point configuration or estimate the underlying density p, just from the adjacency matrix of the unweighted graph? As I will show in the talk, the answer is yes. I present a proof for this result, and also discuss relations to the problem of ordinal embedding.
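To make the information model concrete, here is a minimal sketch (assumptions mine: Euclidean data and a directed k-NN graph) of what the estimator gets to see, and nothing more:

```python
# Build the *unweighted* directed k-NN adjacency matrix from sample points,
# then discard the coordinates: the estimation problem starts from A alone.
import numpy as np

def knn_adjacency(X, k):
    """Unweighted directed k-NN adjacency matrix of the rows of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)            # a point is not its own neighbor
    nn = np.argsort(D, axis=1)[:, :k]      # indices of the k nearest neighbors
    A = np.zeros((len(X), len(X)), dtype=int)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = 1
    return A

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # sample from an (unknown) density on R^2
A = knn_adjacency(X, k=10)                 # all that the estimator is given
```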
We will give an introduction to the methods of the book "Towards the mathematics of quantum field theory". One may sum up these methods by saying that the geometry of the physicists' "spaces of fields" can be studied either by a coordinate (i.e., algebraic or analytic) approach, or by a parametrized (i.e., geometric) approach. Both approaches are complementary, and the aim of categorical methods in geometry is to improve them by introducing new types of algebras and new types of spaces, in a way that is adapted to the study of both algebraic and geometric obstruction problems. Since such obstruction problems are widespread in quantum field theory, these new methods give a kind of "geometry of obstructions" that provides a way to understand obstruction theory in a geometric manner.
This talk reviews some of the phenomena and theoretical results on the long-time energy behaviour of continuous and discretized oscillatory systems that can be explained by modulated Fourier expansions: long-time preservation of total and oscillatory energies in oscillatory Hamiltonian systems and their numerical discretisations, near-conservation of energy and angular momentum of symmetric multistep methods for celestial mechanics, metastable energy strata in nonlinear wave equations, and long-time stability of plane wave solutions of nonlinear Schroedinger equations.
We describe what modulated Fourier expansions are and what they are good for. Much of the presented work was done in collaboration with Ernst Hairer. Some of the results on modulated Fourier expansions were obtained jointly with David Cohen and Ludwig Gauckler.
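In their simplest form (a single high frequency $\omega$; notation mine), modulated Fourier expansions are an ansatz
$$x(t) \;\approx\; \sum_{|k| \le K} z_k(t)\, e^{\mathrm i k \omega t},$$
where the modulation functions $z_k$ vary on a time scale much slower than the oscillations $e^{\mathrm i k\omega t}$; the long-time conservation results are then read off from almost-invariants of the system satisfied by the $z_k$.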
We discuss a family of ideas, algorithms, and results for learning from high-dimensional data. These methods rely on the idea that complex high-dimensional data has geometric structures, often low-dimensional, that, once discovered, assist in a variety of statistical learning tasks, as well as in tasks such as data visualization. We discuss various realizations of these ideas, from manifold learning and dimension reduction techniques to new techniques based on suitable multiscale geometric decompositions of the data. We will then discuss how these multiscale decompositions may be used to solve various tasks, from dictionary learning to classification, to the construction of probabilistic models for the data, to approximation of high-dimensional stochastic systems.
Dynamics in nature often proceed in the form of rare reactive events: The system under study spends very long periods of time at various metastable states and only very rarely transitions from one metastable state to another. Conformation changes of macromolecules, chemical reactions in solution, nucleation events during phase transitions, thermally induced magnetization reversal in micromagnets, etc. are just a few examples of such reactive events. One can often think of the dynamics of these systems as a navigation over a potential or free energy landscape, under the action of small amplitude noise. In the simplest situations the metastable states are then regions around the local minima on this landscape, and transition events between these regions are rare because the noise has to push the system over the barriers separating them. This is the picture underlying classical tools such as transition state theory or Kramers reaction rate theory, and it can be made mathematically precise within the framework of large deviation theory. In complex high dimensional systems, this picture can however be naive because entropic (i.e. volume) effects start to play an important role: Local features of the energy, such as the location of its minima or saddle points, may have much less of an impact on its dynamics than global features such as the width of low lying basins on the landscape: in these situations a more general framework for the description of metastability is required. In this talk, I will discuss tools that have been introduced to that effect, based e.g. on potential theory, and illustrate them on various examples, including the folding of toy models of proteins and the rearrangement of Lennard-Jones clusters.
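The landscape picture sketched above can be made quantitative (standard notation, not specific to the talk): for the small-noise overdamped dynamics $dX_t = -\nabla V(X_t)\,dt + \sqrt{2\varepsilon}\,dW_t$, Freidlin-Wentzell theory gives, for the expected time $\tau$ to cross a barrier of height $\Delta V$, the Arrhenius-type asymptotics
$$\mathbb E[\tau] \;\asymp\; e^{\Delta V/\varepsilon} \qquad (\varepsilon \to 0);$$
it is exactly this local, barrier-dominated description that becomes naive when entropic effects and the widths of low-lying basins take over.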
It is now 25 years since Hara and Slade published their seminal work on the mean-field behavior of percolation in high dimensions, showing that at criticality there is no percolation and identifying several percolation critical exponents. The main technique used is the lace expansion, a perturbation technique that allows us to compare percolation paths to random walks, based on the idea that faraway pieces of percolation paths are almost independent in high dimensions. In the past few years, a number of novel results have appeared for high-dimensional percolation. I intend to highlight the following topics: (i) the recent computer-assisted proof, with Robert Fitzner, which identifies the critical behavior of nearest-neighbor percolation above 14 dimensions using the so-called Non-Backtracking Lace Expansion (NoBLE); while these results are expected to hold above 6 dimensions, the previous and unpublished proof by Hara and Slade only applied above 18 dimensions; (ii) the identification of arm exponents in high-dimensional percolation in two works by Asaf Nachmias and Gady Kozma, using a clever and novel difference inequality argument, and its implications for the incipient infinite cluster and random walks on it; (iii) the finite-size scaling for percolation on a high-dimensional torus, where the largest connected components share many features with the Erdős-Rényi random graph. In particular, substantial progress has been made concerning percolation on the hypercube, where joint work with Asaf Nachmias avoids the lace expansion altogether. We assume no prior knowledge about percolation.
Measure concentration ideas were developed during the last century in various parts of mathematics, including functional analysis, probability theory and statistical mechanics, areas typically dealing with models involving an infinite number of variables. After several early observations, the real birth of measure concentration took place in the early seventies with the new proof by V. Milman of Dvoretzky's theorem on spherical sections of convex bodies in high dimension. Since then, the concentration of measure phenomenon has spread to a wide range of illustrations and applications, and has become a central tool and viewpoint in the quantitative analysis of asymptotic properties in numerous topics of interest including geometric analysis, probability theory, statistical mechanics, mathematical statistics and learning theory, random matrix theory, randomized algorithms, complexity etc. The talk will feature basic aspects and some of these illustrations.
Colloquium on the occasion of Wolfgang Hackbusch's retirement.
In joint work with Simon Blatt and Melanie Rupflin we lay out a functional analytic framework for the Lane-Emden equation $-\Delta u = u|u|^{p-2}$ on an $n$-dimensional domain, $n\ge 3$, in the supercritical regime $p>\frac{2n}{n-2}$, and study the associated gradient flow.
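For orientation (this is the standard setup; the precise function spaces are the point of the work): the equation is the Euler-Lagrange equation of the energy
$$E(u) \;=\; \int \Big(\tfrac12\,|\nabla u|^2 - \tfrac1p\,|u|^p\Big)\,dx,$$
and the associated $L^2$-gradient flow is the semilinear heat equation $\partial_t u = \Delta u + u|u|^{p-2}$, whose supercritical behaviour is the object of study.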
The soliton resolution conjecture for the focusing nonlinear Schrodinger equation (NLS) is the vaguely worded claim that a global solution of the NLS, for generic initial data, will eventually resolve into a radiation component that disperses like a linear solution, plus a localized component that behaves like a soliton or multi-soliton solution. Considered to be one of the fundamental problems in the area of nonlinear dispersive equations, this conjecture has eluded a proof or even a precise formulation to date. I will present a theorem that proves a "statistical version" of this conjecture at mass-subcritical nonlinearity. The proof involves a combination of techniques from large deviations, PDE, harmonic analysis and bare-hands probability theory.
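For concreteness (standard conventions, which may differ from the talk's by normalization): the focusing NLS is
$$\mathrm i\,\partial_t u + \Delta u + |u|^{p-1}u = 0, \qquad u(0,\cdot) = u_0,$$
and mass-subcritical means $1 < p < 1 + 4/d$ in space dimension $d$, the regime in which the conserved mass $\int |u|^2\,dx$ dominates the scaling.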
In this talk we discuss concentration inequalities that estimate deviations of functions of independent random variables from their expectation. Such inequalities often serve as an elegant and powerful tool and have countless applications. Various methods have been developed for proving such inequalities, such as martingale methods, Talagrand's induction method, or Marton's transportation-of-measure technique. In this talk we focus on the so-called entropy method, pioneered by Michel Ledoux, that is based on some simple information-theoretic inequalities. We present the main steps of the proof technique and discuss various inequalities and some applications.
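As a flavor of the inequalities in question (this one, the bounded differences inequality, predates the entropy method but is also recoverable by it): if $f$ changes by at most $c_i$ when its $i$-th argument is changed, then for independent $X_1,\dots,X_n$,
$$\mathbb P\big(|f(X_1,\dots,X_n) - \mathbb E f| \ge t\big) \;\le\; 2\,\exp\Big(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\Big).$$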
Maximum likelihood estimation is a fundamental computational task in statistics. We discuss this problem for manifolds of low rank matrices. These represent mixtures of independent distributions of two discrete random variables. This non-convex optimization problem leads to some beautiful geometry, topology, and combinatorics. We explain how numerical algebraic geometry is used to find the global maximum of the likelihood function, and we present a remarkable duality theorem due to Draisma and Rodriguez.
In most situations of modern materials science, several scales have to be considered, sequentially or concurrently. This raises several fundamental questions and gives rise to challenging computational issues. This is especially true when the materials considered have no simple, regular structure. The talk will overview some mathematical (and numerical) questions in this area. The level of exposition will be deliberately kept elementary. The flavour of the mathematical ingredients and techniques will be given, the focus being on the questions raised rather than the answers provided.
One-dimensional random interfaces occur for example in random growth models and random tiling models.
In some models their statistical properties turn out to be related to random matrix statistics. I will concentrate on random tiling or dimer models like the Aztec diamond and discuss the statistics of the tiles/dimers. The models can also be interpreted as certain random surfaces. Associated with these models are random point processes, so-called determinantal point processes. I will discuss these processes and their scaling limits, which are expected to be universal in the sense that they should arise as natural scaling limits in various models. The talk will give an overview of some developments in this area aimed at a general audience.
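Recall the standard definition: a point process is determinantal with correlation kernel $K$ if its $k$-point correlation functions have the determinantal form
$$\rho_k(x_1,\dots,x_k) \;=\; \det\big(K(x_i,x_j)\big)_{i,j=1}^{k},$$
so the scaling limits mentioned above are obtained as limits of the kernels $K$.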
The effective dynamics of a molecular system can be characterized by the switching behavior between several metastable states, the so-called conformations of the system that determine its functionality. Steering a molecular system from one conformation into another is, on the one hand, a means of controlling its functionality, while on the other hand it can be used to gather information about transition trajectories.
This talk considers optimal control problems that appear relevant in steered molecular dynamics (MD). It will be demonstrated how the associated Hamilton-Jacobi-Bellman (HJB) equation can be solved.
The main idea is to first approximate the dominant modes of the MD transfer operator by a low-dimensional Markov state model (MSM), and then solve the HJB for the MSM rather than the full MD. We then will discuss whether the resulting optimal control process may help to characterize the ensemble of transition trajectories.
The resulting method will be illustrated by an application to the maximization of the population of alpha-helices in an ensemble of peptides.
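As a schematic illustration of the MSM step (a toy discretization in Python; the actual discretization and cost functional of the talk may differ): on a finite Markov state model, the HJB equation reduces to a Bellman backward recursion, solvable by dynamic programming.

```python
# Finite-horizon optimal control of a controlled Markov chain (toy MSM).
import numpy as np

def solve_bellman(P, cost, terminal, T):
    """P: (n_actions, n, n) transition matrices of the controlled MSM,
    cost: (n_actions, n) running costs, terminal: (n,) terminal cost.
    Returns the value function V[t, s] and an optimal policy a*[t, s]."""
    n_actions, n, _ = P.shape
    V = np.zeros((T + 1, n)); V[T] = terminal
    policy = np.zeros((T, n), dtype=int)
    for t in range(T - 1, -1, -1):
        Q = cost + P @ V[t + 1]            # Q[a, s] = c(s, a) + E_a[V_{t+1} | s]
        policy[t] = np.argmin(Q, axis=0)   # greedy action in each state
        V[t] = Q[policy[t], np.arange(n)]
    return V, policy

# Two actions on a 3-state toy MSM: action 1 pays a running cost but biases
# transitions toward state 2 (the "target conformation").
P = np.array([[[.9, .1, 0.], [.1, .8, .1], [0., .1, .9]],
              [[.5, .4, .1], [.1, .4, .5], [0., .1, .9]]])
cost = np.array([[0., 0., 0.], [.2, .2, .2]])
V, policy = solve_bellman(P, cost, terminal=np.array([1., 1., 0.]), T=50)
```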
The numerical solution of initial boundary value problems of partial differential equations with random input data by generalized polynomial chaos (gpc) and multilevel Monte-Carlo (MLMC) methods is considered.
In numerical methods based on gpc expansions, random coefficients are parametrized in terms of countably many random variables via a Karhunen-Loeve (KL) or a multiresolution (MR) expansion, and random solutions are represented in terms of polynomial chaos expansions of the inputs' coordinates. Thus, the PDE problems are reformulated as parametric families of deterministic initial boundary value problems on infinite dimensional parameter spaces. Their solutions are represented as gpc expansions in the (possibly countably many) input parameters. Convergence rates for best N-term approximations of the parametric solutions and Galerkin and Collocation algorithms which realize these best N-term approximation rates are presented. The complexity of these algorithms is compared to those of MLMC space-time discretizations, in terms of the regularity of the input data, in particular for PDEs with propagation of singularities.
Joint work with Siddartha Mishra, Roman Andreev, Andrea Barth, Claude Gittelson, Jonas Sukys, David Berhardsgruetter of SAM, ETH and with Albert Cohen (Paris), R. DeVore (Texas A&M) and Viet-Ha Hoang (NTU Singapore)
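The KL parametrization step mentioned above takes the standard form
$$a(x,\omega) \;=\; \bar a(x) + \sum_{j \ge 1} \sqrt{\lambda_j}\,\psi_j(x)\,Y_j(\omega),$$
where $(\lambda_j,\psi_j)$ are the eigenpairs of the covariance operator of the random coefficient $a$ and the $Y_j$ are scalar random variables; replacing $Y_j(\omega)$ by coordinates $y_j$ turns the random PDE into a deterministic parametric problem, e.g. on $y \in [-1,1]^{\mathbb N}$.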
Infinite random graphs, such as Galton-Watson trees and percolation clusters, may have real numbers that are eigenvalues with probability one, providing a consistent "sound". These numbers correspond to atoms in their density-of-states measure.
When does the sound exist? When is the measure purely atomic? I will review many examples and show some elementary techniques developed in joint works with Charles Bordenave and Arnab Sen.
Remarkably, understanding the spectra of random graphs also yields results about deterministic Cayley graphs of lamplighter groups. In joint work with L. Grabowski we answer an old question of W. Lück.
In the late eighties Peter Winkler introduced the following problem: consider two independent discrete-time random walks, X and Y, on the complete graph with N vertices. If the trajectories of X and Y are given, would it be possible, knowing all future steps of the walks and changing jump times only, to keep X and Y apart forever, with positive probability? This became well known as the Clairvoyant Demon Problem.
Soon after, Noga Alon observed that this question is equivalent to the existence of a phase transition in a planar dependent percolation process. Remarkably, several other interesting questions, such as Lipschitz embeddings of binary sequences and quasi-isometries between one-dimensional random objects, could also be reduced to a similar type of percolation.
During the lecture I will explain deep conceptual differences between N. Alon's percolation process and "usual" dependent percolation models, and the difficulties to which these lead. In the second half of the talk I will present a proof of an affirmative answer to the original Winkler question.
Gradient systems can be understood as mathematical realizations of the Onsager principle in thermodynamics, which states that the flux is given by a positive definite operator, called the Onsager operator, applied to the thermodynamic driving force. We show that reaction-diffusion systems satisfying a detailed-balance condition can be formulated as a gradient system for the relative entropy and an Onsager operator (the inverse of the Riemannian tensor), which is given as the sum of a diffusion part (Wasserstein metric) and a reaction part. This approach allows us to connect gradient-flow formulations of discrete many-particle systems with their continuous limits. Moreover, well-established concepts for scalar equations, such as geodesic lambda-convexity or exponential decay to equilibrium, can be generalized to these more general systems.
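Schematically (in the notation of the gradient-structure literature), such a system reads
$$\dot u \;=\; -\mathcal K(u)\,\mathrm D\mathcal E(u), \qquad \mathcal K(u) \;=\; \mathcal K_{\mathrm{diff}}(u) + \mathcal K_{\mathrm{react}}(u),$$
with $\mathcal E$ the relative entropy and, for the diffusive part, the Wasserstein-type Onsager operator $\mathcal K_{\mathrm{diff}}(u)\,\xi = -\nabla\cdot(u\,\nabla\xi)$ acting on driving forces $\xi$.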
In this talk, we will consider several coupled multi-physics problems from different application areas such as finance, porous media and continuum mechanics. Quite often, the naive application of standard discretization schemes yields poor numerical results, and spurious oscillations in time or locking in space can be observed. Many of these problems can be written as constrained minimization problems on a convex set. Due to the inequality character and the non-linearities in the formulation, the numerical simulation is still challenging. Here we address several of these challenges and present a variationally consistent space discretization and an energy-preserving stable time integration method. The abstract framework of saddle point problems and local a priori estimates helps to provide optimal error bounds. Numerical results show the flexibility and the robustness.
While partial differential equations (PDEs) have many fascinating applications in image processing, their usefulness for lossy image compression has hardly been studied so far. In this talk we show that PDEs have a high potential to become alternatives to modern compression standards such as JPEG and JPEG 2000. The idea sounds temptingly simple: We keep only a small amount of the pixels and reconstruct the remaining data with PDE-based interpolation. This gives rise to three interdependent questions: 1. Which are the best data for being kept? 2. What are the most useful PDEs for data interpolation? 3. How can the selected data be encoded in an efficient way? Solving these problems requires combining ideas from different mathematical disciplines such as mathematical modelling, shape optimisation, discrete optimisation, interpolation and approximation, information theory, and numerical methods for PDEs. Since the talk is intended for a broad audience, we focus on the main ideas, and no specific knowledge in image processing is required.
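To illustrate questions 1 and 2 in their simplest form (homogeneous diffusion inpainting with randomly kept pixels; actual codecs use carefully optimised data selection and more sophisticated PDEs), here is a minimal Python sketch:

```python
# Keep a small mask of pixels and fill the rest by solving the Laplace
# equation, here with simple Jacobi-type relaxation and, for brevity,
# periodic boundary conditions via np.roll.
import numpy as np

def diffusion_inpaint(image, mask, n_iter=5000):
    """image: 2D array; mask: True where pixels are kept."""
    u = np.where(mask, image, image.mean())      # initial guess on unknown pixels
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, image, avg)           # relax only the unknown pixels
    return u

rng = np.random.default_rng(1)
img = np.fromfunction(lambda i, j: np.sin(i / 8) + np.cos(j / 11), (64, 64))
mask = rng.random(img.shape) < 0.05              # keep only ~5% of the pixels
rec = diffusion_inpaint(img, mask)
print("mean abs error:", np.abs(rec - img).mean())
```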
Treating complex design or optimization problems, especially under online constraints, is often practically feasible only when the underlying model is suitably reduced.
The so-called Reduced Basis Method is an attractive variant of a model order reduction strategy for models based on parametric families of PDEs, since it often allows one to rigorously monitor the accuracy of the reduced model. A key role is played by a greedy construction of reduced bases based on appropriate numerically feasible surrogates for the actual distance of the solution manifold from the reduced space.
While this concept has meanwhile been applied to a wide scope of problems, the theoretical understanding concerning certified accuracy is still essentially confined to the class of elliptic problems. This is reflected by the varying performance of these concepts, for instance for transport-dominated problems. We show that such a greedy space search is rate-optimal, when compared with the Kolmogorov widths of the solution set, if the surrogates are in a certain sense tight. A key task is therefore to derive tight surrogates beyond the class of elliptic PDEs. We highlight the main underlying concepts, centering on the convergence of greedy methods in Hilbert spaces and the derivation of well-conditioned variational formulations for unsymmetric or singularly perturbed PDEs.
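A schematic version of the greedy construction (idealized: the surrogate below is the exact projection error, i.e. the perfectly "tight" case; in practice one uses a computable error estimator instead):

```python
# Greedy reduced-basis selection from a set of solution snapshots.
import numpy as np

def greedy_basis(snapshots, n_basis):
    """snapshots: (n_params, dim) solution samples; returns an orthonormal basis."""
    S = snapshots.astype(float)
    basis = []
    for _ in range(n_basis):
        R = S.copy()                                  # residuals of all snapshots
        for b in basis:
            R -= np.outer(R @ b, b)                   # remove span(basis) component
        worst = np.argmax(np.linalg.norm(R, axis=1))  # greedy step: worst-approximated
        v = R[worst] / np.linalg.norm(R[worst])
        basis.append(v)                               # enrich the reduced space
    return np.array(basis)
```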
This is a report on ongoing work with Anne Nouri from Marseille and Thierry Paul from Polytechnique Palaiseau. I dub the equation $$ \partial_t f + v\cdot\partial_x f - \partial_x\Big(\int f(x,w,t)\,dw\Big)\,\partial_v f = 0 $$ Vlasov-Dirac, because it is a Vlasov-type equation where the standard potential has been replaced by the Dirac mass. This turns out to be an important problem for different reasons. 1. It is a genuine model for the numerical simulation of plasmas. 2. It exhibits some singular stability/instability properties which can be more easily analyzed than the standard instabilities (Landau damping and so on) of the original Vlasov equation. 3. It is at the crossroads between mean-field derivations and WKB or Wigner asymptotics.
Models of the Platonic and Archimedean solids may be found in many mathematics classrooms. Important examples of polyhedra in nature are the fullerenes in chemistry and icosahedral viral capsids in biology. Such polyhedra "self-assemble" from their constituent atoms or proteins. In the past few years, an exciting development in science has been the use of self-assembly as a strategy to build devices and containers on very small scales. These experiments raise some interesting mathematical questions. I will describe our work on "self folding" polyhedra and various questions on the combinatorics of assembly pathways.
This work is in collaboration with several students and David Gracias's lab at Johns Hopkins.
Planar maps are graphs embedded in the plane, considered up to continuous deformation. They have been studied extensively in combinatorics, and they also have significant geometrical applications. Particular cases of planar maps are p-angulations, where each face (meaning each component of the complement of the edges) has exactly p adjacent edges.
Random planar maps have been used in theoretical physics, where they serve as models of random geometry. Our goal is to discuss the convergence in distribution of rescaled random planar maps viewed as random metric spaces. More precisely, we consider a random planar map M(n) which is uniformly distributed over the set of all p-angulations with n vertices. We equip the set of vertices of M(n) with the graph distance rescaled by the factor n to the power -1/4. Both in the case p=3 and when p>3 is even, we prove that the resulting random metric spaces converge as n tends to infinity to a universal object called the Brownian map. This convergence holds in the sense of the Gromov-Hausdorff distance between compact metric spaces. In the particular case of triangulations (p=3), this solves an open problem stated by Oded Schramm in his 2006 ICM paper. As a key tool, we use bijections between planar maps and various classes of labeled trees.
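In symbols (with constants suppressed): writing $d_{\mathrm{gr}}$ for the graph distance on the vertex set $V(M(n))$, the result states that
$$\big(V(M(n)),\, n^{-1/4}\, d_{\mathrm{gr}}\big) \;\longrightarrow\; (m_\infty, D) \qquad \text{in distribution as } n \to \infty,$$
for the Gromov-Hausdorff topology, where $(m_\infty, D)$ is the Brownian map; for each admissible $p$ the limit is the same space up to a $p$-dependent scale factor in the distance.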
We begin by introducing an action principle defined on a finite set of points. This action principle is causal in the sense that it generates a relation on pairs of points which distinguishes between spacelike and timelike separation. In this way, minimizing the action gives rise to a "discrete causal structure". We generalize our action principle to include continuum space-times and review existence results. We outline how the same action principle can be formulated in Minkowski space to obtain a formulation of quantum field theory.
In the second part of the talk, we consider as a special case a variational principle for Borel measures on the two-sphere. We prove that the support of every minimizing measure has no interior. This can be understood as saying that, when minimizing the action, a spontaneous symmetry-breaking effect leads to the formation of a discrete structure.
We discuss several theorems relating the local CR-embeddability of 3-dimensional CR manifolds to the existence of algebraically special Maxwell and gravitational fields. We reduce the Einstein equations for spacetimes associated with such fields to a system of CR-invariant equations on a 3-dimensional CR manifold that is defined by the fields. Using the reduced Einstein equations, we construct two independent CR functions, which give the embedding. We also point out that the Einstein equations imply that the spacetime metric, after rescaling, becomes well defined on a circle bundle over the CR manifold.
After very briefly surveying studies in complex systems biology, I discuss studies in the other direction, i.e., studies of dynamical systems inspired by biology rather than biology studied via dynamical systems. Five topics are discussed.
The first issue concerns reluctance to relax to equilibrium. Biological systems, in general, are kept from falling into equilibrium. Put differently, is there some mechanism by which relaxation to equilibrium is hindered even in a closed physico-chemical system? We show that “transient dissipative structures” in macroscopic catalytic reaction systems hinder relaxation to equilibrium, and then discuss the mechanism behind this.
The second issue concerns a Hamiltonian system with a large number of degrees of freedom, globally coupled with each other. In contrast to the naive expectation for equilibrium systems, we provide an example in which collective macroscopic oscillation continues over a large time span before relaxing to equilibrium, with a duration that increases with the system size. This collective oscillation is explained by a self-consistent 'swing' mechanism.
The third issue concerns the number of degrees of freedom beyond which a system should be regarded as "many-dimensional". We show that there is a critical number around 5 to 10, beyond which attractors often touch the basin boundary. This number is discussed in terms of the magnitude relation between exponential and factorial growth.
The fourth problem is motivated by cell differentiation. We give examples of such differentiation of states, represented as internal bifurcations in coupled dynamical systems.
Last, if time permits, the design of robust dynamical systems is discussed in relation to the evolution of gene regulatory networks, where a link between robustness to noise and robustness to structural change is shown, implying evolutionary congruence between developmental and mutational robustness.
The substrate of heredity, DNA, is chemically rather inert. However, it bears one of the elements of information that specify the form of the organism. How can a form be specified, starting from DNA? Recent observations indicate that the dynamics of transcription, the process that decodes the hereditary information, can imprint forms of a certain topological class onto DNA. This topology makes it possible both to optimize transcription and to facilitate the concerted change of the transcriptional status in response to environmental modifications. To the best of our knowledge, this morphogenetic event is the first on the path from DNA to organism.
The solenoidal model of chromosomes will be described and anchored in transcriptomic data. New methods to analyse those data will be reported.
The low-temperature Potts model is the simplest statistical mechanical model of q coexisting phases. In this talk we shall explain how to prove that in two dimensions Potts equilibrium crystal shapes are always smooth and strictly convex. In other words, in two dimensions Potts models do not undergo a roughening transition. Since the models in question (except for the Ising q=2 case) are not exactly soluble, the proof relies on an intrinsic probabilistic analysis of random phase separation lines. The main step of the latter is to develop finite-scale renormalization procedures which enable a coding of the interface distribution in terms of a Ruelle operator for full shifts over countable alphabets.
Joint work with Massimo Campanino and Yvan Velenik.
The theory of gravity is non-renormalizable. Thus we either have to modify the theory or face the problem of dealing with a non-renormalizable theory if we insist that gravity should be quantized. Here we follow the latter path and define quantum gravity as a non-perturbative sum over geometries in an exceedingly simple way. The approach is background independent, but nevertheless we "observe" in computer simulations the emergence of a four-dimensional universe which can be viewed as a classical universe with superimposed quantum fluctuations. The results might be related to the ideas of Hartle, Hawking and Vilenkin about the creation of our universe from nothing.
The periodic Lorentz gas describes a particle moving in a periodic array of spherical scatterers, and is one of the fundamental mathematical models for chaotic diffusion in a periodic set-up. In this lecture (aimed at a general mathematical audience) I describe the recent solution of a problem posed by Y. Sinai in the early 1980s, on the nature of the diffusion when the scatterers are very small. The problem is closely related to some basic questions in number theory, in particular the distribution of lattice points visible from a given position, cf. Polya's 1918 paper "[...] ueber die Sichtweite im Walde" (Polya's orchard problem). The key technology in our approach is measure rigidity, a branch of ergodic theory that has proved valuable in recent solutions of other problems in number theory and mathematical physics, such as the value distribution of quadratic forms at integers, quantum unique ergodicity and questions of diophantine approximation.
(This lecture is based on joint work with A. Strombergsson, Uppsala.)
Engineers are forced to develop processes and products in a set time frame. Thus, they have to provide quantitative and reliable predictions despite incomplete knowledge of the underlying physics. In the past, empirical equations or phenomenological models were used, but this required time-consuming validation and expensive pilot plant tests for each material or any change of the processing window. The trend to shorter product life cycles and an increasing market pressure to reduce costs demand new tools for the assessment and prediction of material properties for a wide range of processing conditions without complete validation. To meet these requirements, computer simulations have been employed recently. The respective codes have to accommodate multiscale information, from atomistics to the macroscopic scale, and generate output on an engineering level. This so-called 'integral materials modeling' will be presented for the example of aluminum sheet processing.
We review the problem of heat conduction in Hamiltonian systems and discuss the derivation of Fourier's law from a truncated set of equations for the stationary state of a mechanical system coupled to boundary noise.
Confining geometries are realised in nature and technology in manifold ways, e.g. in molecular compartments of (living) biological cells, in nanoporous media, in thin (~ 10 nm) polymer layers or in long channels with nanometer diameter. In the talk, the dynamics of low-molecular-weight systems (as model systems) will briefly be discussed, and then emphasis is given to polymeric systems, prepared as isolated chains, as thin glassy layers or as brushes. It will turn out that the length scale on which molecular fluctuations take place becomes essential and that the confinement can give rise to new relaxational modes.
References:
1. Kremer, F., A. Huwe, M. Arndt, P. Behrens and W. Schwieger "How many molecules form a liquid?" J. Phys. Condens. Matter 11, A175-A188 (1999)
2. Huwe, A., F. Kremer, P. Behrens and W. Schwieger "Molecular dynamics in confining space: From the single molecule to the liquid state" Phys. Rev. Lett. 82, 11, pp. 2338-2341 (1999)
3. Spange, St., A. Gräser, A. Huwe, F. Kremer, C. Tintemann and P. Behrens "Cationic host-guest polymerization of N-vinylcarbazole and vinyl ethers in MCM-41, MCM-48 and nanoporous glasses", Chem. European Journ. 7, No. 1, pp. 3722-3728 (2001)
4. Hartmann, L., W. Gorbatschow, J. Hauwede and F. Kremer "Molecular dynamics in thin films of isotactic PMMA", Eur. Phys. J. E 8, pp. 145-154 (2002)
5. Kremer, F., A. Huwe, A. Schönhals and S.A. Rozanski "Molecular Dynamics in Confining Space", Chapt. VI in "Broadband Dielectric Spectroscopy" (Eds. F. Kremer, A. Schönhals), Springer-Verlag Berlin (2002)
6. Serghei, A., F. Kremer "Confinement-induced relaxation process in thin films of cis-polyisoprene" Phys. Rev. Letters Vol. 91, No. 16, pp. 165702-1-165702-4 (2003)
7. Kremer, F., L. Hartmann, A. Serghei, P. Pouret and L. Léger "Molecular dynamics in thin grafted and spin-coated polymer layers" EJP E Vol. 12, No. 1, pp. 139-142 (2003)
8. Serghei, A., F. Kremer, W. Kob "Chain conformation in thin polymer layers as revealed by simulations of ideal random walks" EJP E Vol. 12, No. 1, pp. 143-146 (2003)
The essence of the second law of thermodynamics is the statement that all adiabatic processes (slow or violent, reversible or not) can be quantified by a unique entropy function, S, on the equilibrium states of all macroscopic systems, whose increase is a necessary and sufficient condition for such a process to occur. It is one of the few really fundamental physical laws in the sense that no deviation, however tiny, is permitted and its consequences are far reaching. Since the entropy principle is independent of any statistical mechanical model, it ought to be derivable from a few logical principles without recourse to Carnot cycles, ideal gases and other assumptions about such things as 'heat', 'hot' and 'cold', 'temperature', 'reversible processes', etc. Indeed, temperature is a consequence of entropy rather than the other way around. In this lecture on joint work with Jakob Yngvason, the foundations of the subject and the construction of entropy from a few simple, physical principles will be presented. (For background, see: Notices of the Amer. Math. Soc. 45, p.571 (1998), Physics Today 53, p.32 (April 2000) and Physics Reports 310, p.1 (1999).)
Control of spatiotemporal chaos is one of the central problems of nonlinear dynamics. Recently, we have reported [1] suppression of chemical turbulence by global delayed feedback in the catalytic reaction of CO oxidation on platinum single-crystal surfaces. When the feedback intensity was increased, spiral-wave turbulence was transformed into new intermittent chaotic regimes with cascades of reproducing and annihilating local structures on the background of uniform oscillations. The global feedback further led to the development of cluster patterns and standing waves and to the stabilization of uniform oscillations. These findings are theoretically reproduced in our simulations of the complex Ginzburg-Landau equation with global feedback [2,3] and of the realistic model of the CO oxidation reaction [4].
[1] M. Kim, M. Bertram, M. Pollmann, A. von Oertzen, A.S. Mikhailov, H.H. Rotermund, G. Ertl, Science 292 (2001) 1357
[2] D. Battogtokh, A.S. Mikhailov, Physica D 90 (1996) 84
[3] D. Battogtokh, A. Preusser, A.S. Mikhailov, Physica D 106 (1997) 327
[4] M. Bertram, A.S. Mikhailov, Phys. Rev. E 63 (2001) 066102
In this talk I will describe some analytical problems in Quantum Field Theory (QFT) and some of the recent results and approaches. I will not assume any prior knowledge of the subject and I will try to show how it arises from Classical Field Theory, i.e. partial differential equations. In other words I will view QFT as Quantum Mechanics of infinitely many degrees of freedom or of extended objects (strings, surfaces, etc).
Since any biological cell more advanced in evolution than a bacterium depends in its internal structure and organization on the cytoskeleton, a highly dynamic polymer network within the cell interior, our group strives to understand the physics of the cytoskeleton. We have developed novel laser-based nanomanipulation tools to take a look into cells and to investigate the cytoskeleton. We particularly examine to which extent changes in the cytoskeleton characterize the progression of cancer from precancer to metastasis, and how the cytoskeleton can be used to control neuronal growth. Our ultimate goal is the development of a tabletop device for quick cancer diagnosis, which accomplishes cancer's earliest detection and precise determination of its stage; existing techniques fail in both aspects. Moreover, our research group plans to build well-controlled circuits of genuine neurons and to develop novel therapies in neuroprothetics.
Unravelling the mechanisms of energy transfer on a molecular level is one of the central problems of chemical reaction kinetics. Most intriguing from the chemist's point of view is the connection between dynamical and structural properties. Although empirically well established, this relationship leaves many open questions. What are its microscopic foundations? Are there transferable properties of functional groups, and how do they determine the course of chemical reactions?
Modern spectroscopy opens a unique approach to these problems. The key is provided by the interpretation of molecular spectra in terms of explicit quantum-mechanical models of the underlying molecular motion. Studies of OH and NH2 groups in different environments demonstrate how experiment and theory combine to draw a detailed picture of the molecular quantum dynamics. In perfect analogy to the separation of electronic and nuclear motion in the Born-Oppenheimer approximation, characteristic motions of individual structural features are adiabatically separated from the overall system dynamics. This phenomenon of vibrational adiabaticity could play a central role in understanding the microscopic foundations of empirical structure-reactivity relationships.
All of the energy delivered by the sun that reaches the bottom of the atmosphere is absorbed at this highly varied ground surface. From here, the lowest layer of the atmosphere is supplied with heat. As above a hot plate, buoyant regions form in which warmed air rises, and regions of descent form in which cold air from higher layers of the atmosphere replaces the warm air at the ground. Between regions of ascent and descent, horizontal compensating motions develop, whose small velocities are often superposed by large-scale wind systems. The varied shape of the underlying surface can also induce up-and-down air motions. Ultimately, this provides an energy exchange between the warm surface of the earth and the cold air layered above it. If one held a sensitive measuring instrument into this flow, one would observe rather irregular behavior of meteorological quantities such as wind speed or air temperature. Such irregularity of the air motions is called turbulence, and the energy transport it induces leads to the formation of a convective turbulent boundary layer.
One tries to impose a certain order on the multiplicity of these small-scale motions by means of statistical quantities such as means, variances, and covariances. Theories that describe the turbulent energy exchange by statistical means have been tested in many field experiments. They have proven their power in modeling the turbulent energy exchange within numerical weather prediction models. On closer inspection, the turbulence that accomplishes this energy exchange can be resolved into a superposition of a multitude of eddy structures of different sizes. It makes no sense, however, to account for every single one of these energy-carrying eddies in a weather model if the goal is, say, to compute the energy exchange between the earth's surface and the atmosphere over an entire continent. If the target region is restricted, however, a method that directly resolves as many of these energy-carrying structures as possible can be worthwhile. Impossible in the past, such computations have become state of the art thanks to the enormous increase in computing power. In meteorology these numerical methods are known as large eddy simulations. Large eddy simulations display the fascinating structures described above, the alternation of updraft and downdraft regions within a convective atmospheric boundary layer, which are easy to imagine but hard to observe in nature.
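As a concrete illustration of the statistical description mentioned above, here is a minimal Python sketch of my own, using synthetic data (all constants and the correlation built into the series are assumptions): the covariance between fluctuations of vertical wind and temperature is proportional to the turbulent sensible heat flux.

```python
# Minimal sketch (synthetic data, invented constants): Reynolds decomposition
# of vertical wind w and temperature T, and the covariance <w'T'> that
# measures the turbulent sensible heat flux.
import numpy as np

rng = np.random.default_rng(0)
n = 36000                                    # e.g. 1 h of 10 Hz measurements
w = 0.3 * rng.standard_normal(n)             # vertical wind fluctuations [m/s]
T = 290 + 0.5 * rng.standard_normal(n) + 0.8 * w  # temperature [K], correlated with w

w_prime = w - w.mean()                       # Reynolds decomposition: fluctuations
T_prime = T - T.mean()
var_w = np.mean(w_prime ** 2)                # variance of vertical wind
cov_wT = np.mean(w_prime * T_prime)          # kinematic heat flux <w'T'>

rho, cp = 1.2, 1004.0                        # air density [kg/m^3], heat capacity [J/(kg K)]
H = rho * cp * cov_wT                        # sensible heat flux [W/m^2]
print(f"var(w) = {var_w:.3f} m^2/s^2, <w'T'> = {cov_wT:.3f} K m/s, H = {H:.0f} W/m^2")
```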
Cells in tissues or body fluids cooperate by means of intricate webs of proteins encoding the behavior of each individual cell. As a metaphor, the working of these networks can be regarded as a language, in which the relative spatial position of proteins follows strict rules to enable the cells to communicate on a common "semantic" basis. To understand the rules by which cells make local collective decisions, it is essential to decipher the underlying protein networks. This can only be done by tracing out these networks directly by means of topological proteomics technology, working at the level of each individual cell in intact cell systems such as tissues. The resulting data pose an enormous challenge to informatics and mathematics approaches related to pattern recognition, matching, interpretation, and modelling.
Because calibrated light curves of thermonuclear (Type Ia) supernovae have become a major tool for determining the local expansion rate of the Universe, and also its geometrical structure, considerable attention has been given to models of these events over the past couple of years. There are good reasons to believe that perhaps most Type Ia supernovae are explosions of white dwarf stars, consisting mainly of carbon and oxygen, that have approached the Chandrasekhar mass, M_Chan ≈ 1.39 M⊙, and are disrupted by thermonuclear fusion of carbon and oxygen. Recent progress in modeling Type Ia supernovae, as well as several of the still open questions, is addressed in this talk. Although the main emphasis will be on studies of the explosion mechanism itself and on the related physical processes, including the physics and numerical modeling of turbulent nuclear combustion in degenerate stars, we also discuss observational implications and constraints, including consequences for cosmology.
To assess the function of a biomolecule within cellular processes, one needs information about its three-dimensional spatial structure and about the flexibility of that structure.
The description of the dynamics (and hence the flexibility) of biomolecules leads to multiscale problems in which fast microscales are nonlinearly coupled to slow macroscales. Because of this coupling, the microscale is effectively of great importance for the slow dynamics, i.e., it cannot be trivially averaged or filtered out. On the other hand, the user is often not interested in the details of the microscale: these are merely chemically irrelevant small oscillations of the quasi-rigid molecular scaffold, whereas the macrodynamics is characterized by transitions between "globally" clearly distinct shapes of the molecular scaffold, the so-called conformations of the molecule.
The talk presents a method for the direct computation of these conformations that accounts for the coupling of the microscales without requiring their explicit simulation over macroscopically long times. To this end, a description of the problem within the framework of statistical mechanics is first developed, which leads to the construction of a Markov operator describing the transition probabilities between the conformations.
It turns out that the conformations can be determined from the eigenvectors belonging to a cluster of isolated eigenvalues of this operator. The numerical computation of the conformations therefore requires a discretization of the eigenvalue problem for this operator, which, because of the huge number of degrees of freedom, is feasible only with the help of a special Monte Carlo method.
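A minimal sketch of the eigenvector idea, assuming a toy six-state transition matrix in place of the high-dimensional discretized Markov operator: two metastable blocks produce a cluster of two eigenvalues near 1, and the sign structure of the second eigenvector assigns each state to a conformation.

```python
# Minimal sketch (toy 6-state row-stochastic matrix standing in for the
# discretized Markov operator): conformations appear as a cluster of
# eigenvalues near 1; the sign pattern of the associated eigenvectors
# partitions the states into conformations.
import numpy as np

eps = 0.01                                   # rare transitions between the two blocks
block = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])
P = np.block([[(1 - eps) * block, eps * np.full((3, 3), 1 / 3)],
              [eps * np.full((3, 3), 1 / 3), (1 - eps) * block]])
P /= P.sum(axis=1, keepdims=True)            # ensure rows sum to 1

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
vals, vecs = vals.real[order], vecs.real[:, order]
print("leading eigenvalues:", np.round(vals[:3], 4))   # two eigenvalues near 1

conformation = (vecs[:, 1] > 0).astype(int)  # sign structure of 2nd eigenvector
print("state -> conformation:", conformation)
```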
We present a class of constitutive updates for general viscoplastic solids, including such aspects of material behavior as finite elastic and plastic deformations, non-Newtonian viscosity, rate sensitivity, and arbitrary flow and hardening rules. The distinguishing characteristic of the proposed constitutive updates is that, by construction, the corresponding incremental stress-strain relations derive from a pseudo-elastic strain-energy density. This in turn confers on the incremental boundary value problem a variational structure. In particular, the incremental deformation mapping follows from a minimum principle. In crystals exhibiting latent hardening, the energy function is nonconvex and has wells corresponding to single-slip deformations. This favors microstructures consisting locally of single slip.

We develop a micromechanical theory of dislocation structures and finite-deformation single-crystal plasticity based on the direct generation of deformation microstructures and the computation of the attendant effective behavior. Specifically, we aim at describing the lamellar dislocation structures which develop at large strains under monotonic loading. These microstructures are regarded as instances of sequential lamination and treated accordingly. The present approach is based on the explicit construction of microstructures by recursive lamination and their subsequent equilibration in order to relax the incremental constitutive description of the material. The microstructures are permitted to evolve in complexity and fineness with increasing macroscopic deformation. The dislocation structures are deduced from the plastic deformation gradient field by recourse to Kröner's formula for the dislocation density tensor. The theory is rendered nonlocal by the consideration of the self-energy of the dislocations. Selected examples demonstrate the ability of the theory to generate complex microstructures, to determine the softening effect which those microstructures have on the effective behavior of the crystal, and to account for the dependence of the effective behavior on the size of the crystalline sample, i.e. the size effect. In this last regard, the theory predicts the effective behavior of the crystal to stiffen with decreasing sample size, in keeping with experiment. In contrast to strain-gradient theories of plasticity, the size effect occurs for nominally uniform macroscopic deformations.
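To make the variational structure of such updates concrete, here is a minimal one-dimensional sketch of my own (a toy rate-dependent model with invented constants, not the finite-deformation crystal formulation above): within each load increment, the new plastic strain minimizes an incremental pseudo-elastic energy, and the stress follows from that energy.

```python
# Toy 1D variational constitutive update (all material constants invented):
# the plastic strain at the end of each increment minimizes an incremental
# energy = elastic storage + rate-independent dissipation + viscous part.
import numpy as np
from scipy.optimize import minimize_scalar

E, sigma_y, eta, dt = 200e3, 250.0, 20.0, 1e-3   # modulus, yield stress, viscosity, time step

def incremental_energy(eps_p, eps, eps_p_old):
    elastic = 0.5 * E * (eps - eps_p) ** 2                  # stored elastic energy
    plastic = sigma_y * abs(eps_p - eps_p_old)              # rate-independent dissipation
    viscous = 0.5 * (eta / dt) * (eps_p - eps_p_old) ** 2   # viscous (rate-sensitive) part
    return elastic + plastic + viscous

eps_p = 0.0
for eps in np.linspace(0.0, 0.01, 11):                      # prescribed strain path
    res = minimize_scalar(incremental_energy, args=(eps, eps_p),
                          bounds=(eps_p - 0.01, eps_p + 0.01), method="bounded")
    eps_p = res.x                                           # minimum principle for the update
    stress = E * (eps - eps_p)                              # stress from the pseudo-elastic energy
    print(f"eps = {eps:.4f}   eps_p = {eps_p:.5f}   stress = {stress:7.1f}")
```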
Nonlinear field equations such as the KPZ equation for deposition and the Navier-Stokes equation for hydrodynamics are discussed via transport equations for the correlation function ⟨h(r,t) h(r′,t′)⟩ of the field h(r,t), where h satisfies a diffusion equation driven by a noise f with a given spectrum and containing a nonlinear term, Mhh, which couples the field to itself. In previous work, an equation for the steady-state correlation function ϕ(r−r′) = ⟨h(r,t) h(r′,t′)⟩ was derived and solved to give a power-law solution in an intermediate range of k, i.e. ϕ(k) ∼ |k|^α.
In this paper (joint work with Moshe Shwartz), the probability distribution for h(r,t), or h_{k,ω}, is derived; the procedure has the same relation to the static distribution as Lagrangian mechanics has to Hamiltonian mechanics. A conservative system has a static solution exp(−H/kT), but there is no equivalent for the distribution of histories, so this approach has been little studied. However, since approximations are essential, the Lagrangian method is used here, and it is more powerful than the usual Hamiltonian → Liouville equation → Boltzmann equation route.
The approximate equation is derived and solved under the usual conditions, and its solution takes the form ϕ_k(t) = ϕ_k exp(−k t^γ).
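For illustration (my own sketch, not the authors' calculation), the linear limit of such a model, a diffusion equation driven by white noise with the coupling term Mhh switched off, already produces a power-law steady-state spectrum: each Fourier mode is an Ornstein-Uhlenbeck process with ⟨|h_k|²⟩ = D/(2νk²), i.e. α = −2.

```python
# Linear (Edwards-Wilkinson-type) limit of a noise-driven diffusion equation:
# integrate each Fourier mode with Euler-Maruyama and fit the steady-state
# spectrum phi(k) ~ k^alpha; the exact exponent here is alpha = -2.
import numpy as np

rng = np.random.default_rng(1)
nu, D, dt = 1.0, 1.0, 1e-4
n_modes, n_steps = 64, 200000
k = np.arange(1, n_modes + 1, dtype=float)

h = np.zeros(n_modes, dtype=complex)
acc = np.zeros(n_modes)
n_samples = 0
for step in range(n_steps):
    noise = np.sqrt(D * dt / 2) * (rng.standard_normal(n_modes)
                                   + 1j * rng.standard_normal(n_modes))
    h += -nu * k**2 * h * dt + noise          # dh_k = -nu k^2 h_k dt + f_k
    if step > n_steps // 2:                   # sample after the transient
        acc += np.abs(h) ** 2
        n_samples += 1
phi = acc / n_samples

alpha = np.polyfit(np.log(k), np.log(phi), 1)[0]
print(f"fitted exponent alpha = {alpha:.2f}   (exact value: -2)")
```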
Quantum Field Theory needs regularization and renormalization. In order to cure these diseases, ideas from Noncommutative Geometry might help. Three types of deformations are used. We mention matrix geometry and the Fuzzy Sphere, where a regularization is found which respects the symmetries. Field theory on noncommutative spaces still has divergences; recent attempts to prove renormalizability are reviewed as well.
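The fuzzy-sphere regularization can be made concrete in a few lines. The following sketch uses the standard construction (assumed here, not quoted from the talk): coordinate functions become rescaled su(2) generators in an N-dimensional representation, so the sphere relation holds exactly and rotational symmetry survives at finite matrix size.

```python
# Fuzzy sphere at matrix size N: coordinates x_a = r * J_a / sqrt(j(j+1)),
# with J_a the spin-j (j = (N-1)/2) su(2) generators, so sum_a x_a^2 = r^2
# holds exactly and SU(2) symmetry is preserved by the cutoff.
import numpy as np

def su2_generators(N):
    """Spin-j generators with [J_x, J_y] = i J_z; basis ordered m = j..-j."""
    j = (N - 1) / 2
    m = j - np.arange(N)
    Jp = np.zeros((N, N), dtype=complex)      # raising operator J_+
    for a in range(N - 1):
        Jp[a, a + 1] = np.sqrt(j * (j + 1) - m[a + 1] * (m[a + 1] + 1))
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    Jz = np.diag(m).astype(complex)
    return Jx, Jy, Jz

N = 4                                         # a 4x4 fuzzy sphere
Jx, Jy, Jz = su2_generators(N)
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))            # [J_x, J_y] = i J_z
C = Jx @ Jx + Jy @ Jy + Jz @ Jz                           # Casimir = j(j+1) * identity
print(np.allclose(C, (N**2 - 1) / 4 * np.eye(N)))
```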
Some recent models for inhomogeneous spatial point processes with interaction will be reviewed. The focus is on models derived from homogeneous Markov point processes. For some of the models, the interaction is location dependent. A new type of transformation-related model with this property is also suggested. Statistical inference based on likelihood and pseudolikelihood is discussed for the different models. In particular, it is shown that for transformation models the pseudolikelihood function can be decomposed in a fashion similar to the likelihood function.
Before the review, I will also give a summary of the research at the Laboratory for Computational Stochastics.
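As an illustration of the pseudolikelihood inference discussed above, here is a minimal sketch for a Strauss process on the unit square (my own toy setup; the grid approximation of the integral and all parameter values are assumptions, and the transformation models themselves are not reproduced here).

```python
# Log pseudolikelihood of a Strauss process on [0,1]^2: sum the log
# conditional intensities lambda(x_i; x \ {x_i}) = beta * gamma^t at the data
# points, minus a grid approximation of the integral of lambda(u; x) over
# the window. Here t counts points within interaction distance r.
import numpy as np

def log_pseudolikelihood(points, beta, gamma, r, n_grid=50):
    def n_close(u, pts):
        if len(pts) == 0:
            return 0
        return int(np.sum(np.linalg.norm(pts - u, axis=1) < r))

    term1 = 0.0                               # sum of log conditional intensities
    for i, xi in enumerate(points):
        rest = np.delete(points, i, axis=0)
        term1 += np.log(beta) + n_close(xi, rest) * np.log(gamma)

    g = (np.arange(n_grid) + 0.5) / n_grid    # midpoint grid over the window
    term2 = 0.0
    for ux in g:
        for uy in g:
            term2 += beta * gamma ** n_close(np.array([ux, uy]), points)
    term2 /= n_grid ** 2                      # integral of lambda(u; x) du
    return term1 - term2

rng = np.random.default_rng(2)
pts = rng.random((30, 2))                     # stand-in pattern (binomial, not Strauss)
print(log_pseudolikelihood(pts, beta=30.0, gamma=0.5, r=0.05))
```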
The evolution of populations under the joint action of mutation and selection is, in the framework of classical population genetics, described by systems of ordinary differential equations. These equations carry over to molecular evolution if alleles are identified with sequences and a suitable mutation model is specified. The resulting systems are, however, very large and hard to treat.
Matters are simplified by a connection to statistical physics. It may be shown that the mutation-reproduction matrix of the evolution model is exactly equivalent to the Hamiltonian of an Ising quantum chain. Here, the mutation rate corresponds to the temperature, and the fitness of a sequence may be identified with the interaction energy of the spins within the chain. Hence, the methods of statistical physics may be used to diagonalize the mutation-reproduction matrix, and thus solve the evolution model exactly. However, the quantum-mechanical states do not translate directly into the probabilities of the evolution model, since they rely on the quantum-mechanical (as opposed to classical) probability concept; here, the methods require some modification.
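To see what the direct approach involves before any statistical-physics mapping, here is a minimal sketch (a toy two-letter alphabet with additive fitness, my own choice rather than the talk's general setup): the mutation-reproduction matrix for binary sequences of length L is built explicitly and diagonalized; its leading eigenvalue is the equilibrium mean fitness, and its leading eigenvector gives the equilibrium population. The exponential growth of this matrix with L is exactly what makes the Ising-chain methods valuable.

```python
# Toy mutation-selection model: binary sequences of length L, additive
# fitness -s * (number of mutations), single-site mutations at rate mu.
# Diagonalizing the mutation-reproduction matrix solves the model.
import numpy as np
from itertools import product

L, mu, s = 8, 0.05, 0.1                       # sites, mutation rate per site, selection
seqs = list(product([0, 1], repeat=L))        # all 2^L sequences
index = {sq: i for i, sq in enumerate(seqs)}
n = len(seqs)

A = np.zeros((n, n))
for i, sq in enumerate(seqs):
    A[i, i] = -s * sum(sq) - L * mu           # fitness minus total mutation outflow
    for site in range(L):
        mut = list(sq); mut[site] ^= 1        # single-site mutation
        A[index[tuple(mut)], i] = mu          # inflow into the mutant sequence

vals, vecs = np.linalg.eig(A)
top = np.argmax(vals.real)
p = np.abs(vecs[:, top].real)
p /= p.sum()                                  # equilibrium sequence frequencies
mean_mut = sum(p[i] * sum(seqs[i]) for i in range(n))
print(f"equilibrium mean fitness = {vals.real[top]:.4f}, "
      f"mean number of mutations = {mean_mut:.3f}")
```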
The secondary structures of nucleic acids provide a unique computer model for investigating the most important aspects of their structural and evolutionary biology. Secondary structures, defined as the lists of base pairing contacts in RNA or DNA molecules, are a coarse-grained representation of the 3D structures; nevertheless they capture many important features of the molecules. The existence of efficient algorithms for solving the folding problem, i.e., for predicting the secondary structure given only the sequence, allows a detailed analysis of the model by means of computer simulations. The notion of a "landscape" underlies both the structure formation (folding) and the (in vitro) evolution of RNA.
Evolutionary adaptation may be seen as a hill-climbing process on a fitness landscape which is determined by the phenotype of the RNA molecule (within the model, its secondary structure) and the selection constraints acting on the molecules. We find that a substantial fraction of point mutations do not change an RNA secondary structure. On the other hand, a comparable fraction of mutations leads to very different structures. This interplay of smoothness and ruggedness (or robustness and sensitivity) is a generic feature of both RNA and protein sequence-structure maps. Its consequences, "shape space covering" and "neutral networks", are inherited by the fitness landscapes and determine the dynamics of RNA evolution. Punctuated equilibria at the phenotype level and a diffusion-like evolution of the underlying genotypes are characteristic features of such models.
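The fraction of neutral point mutations can be estimated directly by refolding all single-point mutants of a sequence. A minimal sketch, assuming the ViennaRNA package's Python bindings (`import RNA`) are installed; the example sequence is an arbitrary choice of mine:

```python
# Estimate the neutral fraction of point mutations: refold every single-point
# mutant and count how many keep the minimum-free-energy secondary structure.
import RNA

seq = "GGGAAACGCUUCGGCGAAAGCGUUUCCC"            # arbitrary example sequence
ref_structure, _ = RNA.fold(seq)

alphabet = "ACGU"
neutral = total = 0
for pos in range(len(seq)):
    for base in alphabet:
        if base == seq[pos]:
            continue
        mutant = seq[:pos] + base + seq[pos + 1:]
        structure, _ = RNA.fold(mutant)         # refold the mutant
        total += 1
        if structure == ref_structure:          # same phenotype -> neutral
            neutral += 1

print(f"neutral fraction: {neutral}/{total} = {neutral / total:.2f}")
```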
The folding dynamics of a particular RNA molecule can also be studied in a meaningful way based on secondary structures. Given an RNA sequence, we consider the energy landscape formed by all possible conformations (secondary structures). A straightforward implementation of the Metropolis algorithm is sufficient to produce quite realistic folding kinetics, allowing one to identify metastable states and folding pathways. Just as in the protein case, there are good and bad folders, which can be distinguished by the properties of their energy landscapes.
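A minimal sketch of such Metropolis kinetics, on a one-dimensional stand-in landscape of my own rather than a real secondary-structure move set: the walk equilibrates within one energy well (a metastable state) long before it crosses the barrier, the same separation of time scales used to identify metastable states and folding pathways.

```python
# Metropolis dynamics on a discrete double-well energy landscape (a stand-in
# for a secondary-structure landscape): single-step moves, accept downhill
# moves always and uphill moves with probability exp(-dE/kT).
import numpy as np

rng = np.random.default_rng(3)
n, kT = 100, 1.0
x_grid = np.linspace(-2, 2, n)
E = 5.0 * (x_grid**2 - 1.0) ** 2               # double well, barrier ~ 5 kT

x = 10                                          # start in the left well
visits = np.zeros(n, dtype=int)
crossings, last_side = 0, x_grid[x] > 0
for _ in range(200000):
    y = x + rng.choice([-1, 1])                # propose a neighboring conformation
    if 0 <= y < n:
        dE = E[y] - E[x]
        if dE <= 0 or rng.random() < np.exp(-dE / kT):   # Metropolis rule
            x = y
    visits[x] += 1
    if (x_grid[x] > 0) != last_side:           # count barrier crossings
        crossings += 1
        last_side = x_grid[x] > 0

print(f"barrier crossings in 200000 steps: {crossings}")
print(f"occupancy left/right well: {visits[x_grid < 0].sum()} / {visits[x_grid > 0].sum()}")
```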